Servers: Hadoop, Amazon Rivals, Red Hat/IBM, Kubernetes, OpenStack and More

  • Breaking Out of the Hadoop Cocoon

    The announcement last fall that top Hadoop vendors Cloudera and Hortonworks were coming together in a $5.2 billion merger – and reports about the financial toll that their competition took on each other in the quarters leading up to the deal – revived questions that have been raised in recent years about the future of Hadoop in an era where more workloads are moving into public clouds like Amazon Web Services (AWS), which offer a growing array of services that handle many of the jobs the open-source technology already does.

    Hadoop gained momentum over the past several years as an open-source platform to collect, store and analyze various types of data, arriving as data was becoming the coin of the realm in the IT industry, something that has only steadily grown since. As we’ve noted here at The Next Platform, Hadoop has evolved over the years, with such capabilities as Spark in-memory processing and machine learning being added. But in recent years more workloads and data have moved to the cloud, and the top cloud providers, including Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform, all offer their own managed services, such as AWS’s Elastic MapReduce (EMR). Being in the cloud, these services also offer lower storage costs and easier management – the infrastructure is managed by the cloud providers themselves.

  • A guide for database as a service providers: How to stand your ground against AWS – or any other cloud

    NoSQL database platform MongoDB followed suit in October 2018, announcing a Server Side Public License (SSPL) to protect “open source innovation” and stop “cloud vendors who have not developed the software to capture all of the value while contributing little back to the community.” Event streaming company Confluent issued its own Community License in December 2018 to make sure cloud providers could no longer “bake it into the cloud offering, and put all their own investments into differentiated proprietary offerings.”

  • The CEO of DigitalOcean explains how its 'cult following' helped it grow a $225 million business even under the shadow of Amazon Web Services

    DigitalOcean CEO Mark Templeton first taught himself to code at a small hardwood business. He wanted to figure out how to use the lumber in the factory most efficiently, and spreadsheets only got him so far.

    "I taught myself to write code to write a shop floor control and optimization system," Templeton told Business Insider. "That allowed us to grow, to run the factory 24 hours a day, all these things that grow in small business is new. As a self-taught developer, that's what launched me into the software industry."

    And now, Templeton is learning to embrace these developer roots again at DigitalOcean, a New York-based cloud computing startup. It's a smaller, venture-backed alternative to mega-clouds like Amazon Web Services, but has found its niche with individual programmers and smaller teams.

  • IBM’s Big-Ticket Purchase of Red Hat Gets a Vote of Confidence From Wall Street
  • How Monzo built a bank with open infrastructure

    When challenger bank Monzo began building its platform, the team decided it would get up and running with the container orchestration platform Kubernetes "the hard way". The result is that the team now has visibility into outages and other problems, and Miles Bryant, platform engineer at Monzo, shared some observations about the bank's experience at the recent Open Infrastructure Day event in London.

    Finance is, of course, a heavily regulated industry - and at the same time customer expectations are extremely exacting. If people can't access their money, they tend to get upset.

  • Kubernetes Automates Open-Source Deployment

    Whether for television broadcast, video content creation, or the delivery and transport of streamed media, these workflows all share a common element: the technology supporting this industry is moving rapidly, consistently and definitively toward software and networking. The movement isn’t new by any means; what now seems like ages ago, every implementation required customized software on a customized hardware platform. That world has given way to open platforms running open-source solution sets, often developed for open architectures and collectively created using cloud-based services.

  • Using EBS and EFS as Persistent Volume in Kubernetes

    If your Kubernetes cluster is running in the cloud on Amazon Web Services (AWS), it can use Elastic Block Store (EBS) for persistent storage. Alternatively, Elastic File System (EFS) can be used.

    We know pods are ephemeral, and in most cases we need the data in the pods to persist. To facilitate this, we can mount folders into our pods that are backed by EBS volumes on AWS, using AWSElasticBlockStore, a volume plugin provided by Kubernetes.

    We can also use EFS for storage via the efs-provisioner, which runs as a pod in the Kubernetes cluster with access to an AWS EFS resource. A rough sketch of the EBS case follows below.
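    As a hedged sketch (not from the article itself), an EBS-backed PersistentVolume can be created with the official Kubernetes Python client; the volume ID, name and capacity below are made-up placeholders:

      # Minimal sketch: a PersistentVolume backed by an existing EBS volume.
      # Requires the official client: pip install kubernetes
      from kubernetes import client, config

      config.load_kube_config()          # use the current kubeconfig context
      core = client.CoreV1Api()

      pv = client.V1PersistentVolume(
          metadata=client.V1ObjectMeta(name="ebs-pv-example"),   # hypothetical name
          spec=client.V1PersistentVolumeSpec(
              capacity={"storage": "5Gi"},
              access_modes=["ReadWriteOnce"],                    # an EBS volume attaches to one node at a time
              aws_elastic_block_store=client.V1AWSElasticBlockStoreVolumeSource(
                  volume_id="vol-0123456789abcdef0",             # placeholder EBS volume ID
                  fs_type="ext4",
              ),
          ),
      )
      core.create_persistent_volume(pv)

    A pod, or a PersistentVolumeClaim bound to this volume, can then mount it; with EFS, the efs-provisioner instead creates such PersistentVolumes dynamically in response to claims.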

  • Everything You Want To Know About Anthos - Google's Hybrid And Multi-Cloud Platform

    Google's big bet on Anthos will benefit the industry, the open source community, and the cloud native ecosystem by accelerating the adoption of Kubernetes.

  • Raise a Stein for OpenStack: Latest release brings faster containers, cloud resource management

    The latest OpenStack release is out in the wild. Codenamed Stein, the platform update is said to allow for much faster Kubernetes deployments, add new IP and bandwidth management features, and introduce a software module focused on cloud resource management – Placement.

    In keeping with tradition, the 19th version of the platform was named Stein after Steinstraße or "Stein Street" in Berlin, where the OpenStack design summit for the corresponding release took place in 2018.

    OpenStack is not a single piece of software, but a framework consisting of an integration engine and nearly 50 interdependent modules or projects, each serving a narrowly defined purpose, like Nova for compute, Neutron for networking and Magnum for container orchestration, all linked together using APIs.
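    As a hedged illustration of how those modules hang together behind APIs (this example is not part of the article), the openstacksdk Python library exposes each project through a single connection object; the cloud name "mycloud" below is an assumed entry in clouds.yaml:

      # Minimal sketch using openstacksdk (pip install openstacksdk).
      # "mycloud" is an assumed clouds.yaml entry; any configured cloud works.
      import openstack

      conn = openstack.connect(cloud="mycloud")

      # Each project is reached through the same connection and its REST API:
      for server in conn.compute.servers():     # Nova (compute)
          print("server:", server.name)

      for network in conn.network.networks():   # Neutron (networking)
          print("network:", network.name)

      for image in conn.image.images():         # Glance (images)
          print("image:", image.name)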

  • OpenStack Stein launches with improved Kubernetes support

    The OpenStack project, which powers more than 75 public and thousands of private clouds, launched the 19th version of its software this week. You’d think that after 19 updates to the open-source infrastructure platform, there really isn’t all that much new the various project teams could add, given that we’re talking about a rather stable code base here. There are actually a few new features in this release, though, as well as all the usual tweaks and feature improvements you’d expect.

    While the hype around OpenStack has died down, we’re still talking about a very active open-source project. On average, there were 155 commits per day during the Stein development cycle. As far as development activity goes, that keeps OpenStack on the same level as the Linux kernel and Chromium.

  • Community pursues tighter Kubernetes integration in OpenStack Stein

    The latest version of the open source infrastructure platform OpenStack, called 'Stein', was released today with updates to container functionality, edge computing and networking upgrades, as well as improved bare metal provisioning and tighter integration with the popular container orchestration platform Kubernetes - an effort led by super-user science facility CERN.

    It also marks roughly a year since the OpenStack Foundation pivoted towards creating a more all-encompassing brand that covers under-the-bonnet open source in general, with a new umbrella organisation called the Open Infrastructure Foundation. OpenStack itself had more than 65,000 code commits in 2018, with an average of 155 per day during the Stein cycle.

  • Why virtualisation remains a technology for today and tomorrow

    The world is moving from data centres to centres of data. In this distributed world, virtualisation empowers customers to secure business-critical applications and data regardless of where they sit, according to Andrew Haschka, Director, Cloud Platforms, Asia Pacific and Japan, VMware.

    “We think of server and network virtualisation as being able to enable three fundamental things: a cloud-centric networking fabric, with intrinsic security, and all of it delivered in software. This serves as a secure, consistent foundation that drives businesses forward,” said Haschka in an email interview with Networks Asia. “We believe that virtualisation offers our customers the flexibility and control to bring things together and choose which way their workloads and applications need to go – this will ultimately benefit their businesses the most.”

  • Happy 55th birthday mainframe

    7 April marked the 55th birthday of the mainframe. It was on that day in 1964 that the System/360 was announced and the modern mainframe was born. IBM’s Big Iron, as it came to be called, took a big step ahead of the rest of the BUNCH (Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell). The big leap of imagination was to have software that was architecturally compatible across the entire System/360 line.

  • Red Hat strategy validated as open hybrid cloud goes mainstream

    “Any products, anything that would release to the market, the first filter that we run through is: Will it help our customers with their open hybrid cloud journey?” said Ranga Rangachari (pictured), vice president and general manager of storage and hyperconverged infrastructure at Red Hat.

    Rangachari spoke with Dave Vellante (@dvellante) and Stu Miniman (@stu), co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the Google Cloud Next event. They discussed adoption of open hybrid cloud and how working as an ecosystem is critical for success in solving storage and infrastructure problems (see the full interview with transcript here). (* Disclosure below.)

More in Tux Machines

today's howtos

All Linux, all the time: Supercomputers Top 500

Starting at the top with two IBM-built supercomputers, Summit and Sierra, at the Department of Energy's Oak Ridge National Laboratory (ORNL) in Tennessee and Lawrence Livermore National Laboratory in California, respectively, and going down to the bottom -- a Lenovo Xeon-powered box in China -- all of them run Linux. Linux supports more hardware architectures than any other operating system. In supercomputers, it supports both clusters, such as Summit and Sierra, the most common architecture, and Massively Parallel Processing (MPP), which is used by the number three computer, Sunway TaihuLight. When it comes to high-performance computing (HPC), Intel dominates the TOP500 by providing processing power to 95.6% of all systems included on the list. That said, IBM's POWER powers the fastest supercomputers. One supercomputer works its high-speed magic with Arm processors: Sandia Labs' Astra, an HPE design, which uses over 130,000 Cavium ThunderX2 cores. And what do all these processors run? Linux, of course. 133 systems of the Top 500 supercomputers are using either accelerator or co-processor setups; of these, most are using Nvidia GPUs. And, once more, it's Linux conducting the hardware in a symphony of speed.

Red Hat and SUSE Leftovers

  • Are DevOps certifications valuable? 10 pros and cons
  • Kubernetes 1.15: Enabling the Workloads
    The last mile for any enterprise IT system is the application. In order to enable those applications to function properly, an entire ecosystem of services, APIs, databases and edge servers must exist. As Carl Sagan once said, “If you wish to make an apple pie from scratch, you must first invent the universe.” To create that IT universe, however, we must have control over its elements. In the Kubernetes universe, the individual solar systems and planets are now Operators, and the fundamental laws of that universe have solidified to the point where civilizations can grow and take root. Discarding the metaphor, we can see this in the introduction of Object Count Quota Support For Custom Resources. In English, this enables administrators to count and limit the number of Kubernetes resources across the broader ecosystem in a given cluster. This means services like Knative, Istio, and even Operators like the CrunchyData PostgreSQL Operator, the MongoDB Operator or the Redis Operator can be controlled via quota using the same mechanisms that standard Kubernetes resources have enjoyed for many releases. That’s great for developers, who can now be limited by certain expectations. It would not benefit the cluster for a bad bit of code to create 30 new PostgreSQL clusters because someone forgot to add a “;” at the end of a line. Call them “guardrails” that protect against unbounded object growth in your etcd database.
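    To make that concrete, here is a hedged sketch (not from the original post) of an object-count quota on a custom resource using the Kubernetes Python client; the quota name, namespace and limit are made up, and the key follows the count/<resource>.<group> syntax:

      # Minimal sketch: cap a namespace at five instances of a custom resource.
      # Requires the official client: pip install kubernetes
      from kubernetes import client, config

      config.load_kube_config()
      core = client.CoreV1Api()

      quota = client.V1ResourceQuota(
          metadata=client.V1ObjectMeta(name="pgcluster-quota"),      # hypothetical name
          spec=client.V1ResourceQuotaSpec(
              hard={"count/pgclusters.crunchydata.com": "5"},        # object-count quota for a CRD
          ),
      )
      core.create_namespaced_resource_quota(namespace="database-team", body=quota)

    Once such a quota is in place, a sixth create request for that resource in the namespace is rejected -- the "guardrail" behaviour described above.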
  • Red Hat named HPE’s Partner of the Year at HPE Discover 2019
    For more than 19 years, Red Hat has collaborated with HPE to develop, deliver and support trusted solutions that can create value and fuel transformation for customers. Our work together has grown over these nearly two decades and our solutions now include Linux, containers and telecommunications technologies, to name just a few. As a testament to our collaboration, HPE has named Red Hat the Technology Partner of the Year 2019 for Hybrid Cloud Solutions.
  • Demystifying Containers – Part II: Container Runtimes
    This series of blog posts and corresponding talks aims to provide you with a pragmatic view on containers from a historic perspective. Together we will discover modern cloud architectures layer by layer, which means we will start at the Linux kernel level and end up at writing our own secure cloud native applications. Simple examples paired with the historic background will guide you from the beginning with a minimal Linux environment up to crafting secure containers, which fit perfectly into today's and the future's orchestration world. In the end it should be much easier to understand how features within the Linux kernel, container tools, runtimes, software-defined networks and orchestration software like Kubernetes are designed and how they work under the hood.
  • Edge > Core > Cloud: Transform the Way You Want
    For more than 25 years, SUSE has been very successful in delivering enterprise-grade Linux to our customers. And as IT infrastructure has shifted and evolved, so have we. For instance, we enabled and supported the move to software-defined data centers as virtualization and containerization technologies became more prevalent and data growth demanded a new approach.
  • SUSE OpenStack Cloud Technology Preview Takes Flight
    We are pleased to announce that as of today we are making a technology preview of a containerized version of SUSE OpenStack Cloud available that will demonstrate a future direction for our product. The lifecycle management for this technology preview is based on an upstream OpenStack project called Airship, which SUSE has been using and contributing to for some time. This follows our open / open policy of upstream first and community involvement.

NSA Back Doors in Windows Causing Chaos While Media is Obsessing Over DoS Linux Bug

  • U.S. Government Announces Critical Warning For Microsoft Windows Users
    The United States Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) has gone public with a warning to Microsoft Windows users regarding a critical security vulnerability. By issuing the "update now" warning, CISA has joined the likes of Microsoft itself and the National Security Agency (NSA) in warning Windows users of the danger from the BlueKeep vulnerability. This latest warning, and many would argue the one with most gravitas, comes hot on the heels of Yaniv Balmas, the global head of cyber research at security vendor Check Point, telling me in an interview for SC Magazine UK that "it's now a race against the clock by cyber criminals which makes this vulnerability a ticking cyber bomb." Balmas also predicted that it will only be "a matter of weeks" before attackers start exploiting BlueKeep. The CISA alert appears to confirm this, stating that it has "coordinated with external stakeholders and determined that Windows 2000 is vulnerable to BlueKeep." That it can confirm remote code execution on Windows 2000 might not sound too frightening, since this is an old operating system after all, but it would be unwise to classify this as an exercise in fear, uncertainty and doubt. Until now, the exploits that have been developed, at least those seen in operation, did nothing more than crash the computer. Achieving remote code execution brings the specter of a BlueKeep worm into view, as it hands control of infected machines to the attacker.
  • Netflix uncovers SACK Panic vuln that can bork Linux-based systems