Containers can help energy, oil and gas companies accelerate adoption of cloud-based HPC.

More companies are adopting cloud-based high performance computing (HPC), especially in industries that require significant computational power. The energy, oil and gas markets are no exception. After all, they rely on advanced computer-aided engineering (CAE), advanced data analytics, seismic processing, and other data- and compute-intensive applications. But implementing cloud-based HPC requires new approaches that deliver greater flexibility and ease deployment. One such solution? Containers.


IDC forecasts that the global HPC market will reach about $15.2 billion by 2019. The market research firm also estimates that the formative high-performance data analysis (HPDA) market – the market for big data workloads requiring HPC technology – will reach $4.9 billion in 2019. HPDA includes data-intensive simulations and new methods for handling advanced data analytics, and these are exactly the capabilities the energy, oil and gas markets need.

So what’s the advantage of containers for HPC? With traditional HPC systems, compute, network, power, data, and any associated technologies all need to be architected, designed, and implemented. With a container-based solution running in the cloud, only the application needs to be developed; the infrastructure can be agnostic and always available. That may seem idyllic, but hear me out.

Historically, HPC systems have had to be owned or rented, with installation and ongoing maintenance complicated by power and space requirements and by the specialized knowledge needed to keep the systems running and in constant communication. With the advent of the public cloud and the vast scale that can be rented whenever and wherever it’s needed, enterprises are moving towards hybrid systems more than ever before. Solutions like Red Hat CloudForms let companies manage these hybrid systems from a single pane of glass, removing the need for multiple management and monitoring tools and reducing the workload on technical staff.

By blending on-premise and cloud HPC solutions, IT and cloud professionals can create more efficiencies. But they are still limited by the needs of the software code itself. In fact, the code is one of the greatest challenges for improving HPC, because code as it is written today cannot easily be moved between dissimilar infrastructures or applied instantly to large parallel compute instances. By using containers to decouple the code from the hardware, teams can package an entire software stack – shared libraries, operating system files, user variables, application code, and any dependencies – into an easily deployable, movable image that contains everything needed to run the application across multiple platforms.
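To make that packaging idea concrete, here is a minimal sketch of such a container image definition. The base image, package names, paths, and the application name are all hypothetical – a real build would pull from your own registry and install your own solver and its dependencies:

```dockerfile
# Hypothetical Containerfile for a seismic-processing application.
# Base image, packages, and paths below are illustrative assumptions.
FROM registry.example.com/rhel-hpc-base:latest

# Shared libraries the application depends on
RUN yum install -y openmpi libgfortran && yum clean all

# Application code and its configuration travel inside the image
COPY ./seismic-solver /opt/seismic-solver
COPY ./config/solver.conf /etc/seismic/solver.conf

# User variables the application expects at runtime
ENV OMP_NUM_THREADS=4 \
    SOLVER_DATA_DIR=/data

# The resulting image now carries everything needed to run the job
ENTRYPOINT ["/opt/seismic-solver/bin/run"]
```

Because the libraries, configuration, and environment are baked into the image, the same artifact can run on an on-premise cluster or on rented public cloud capacity without modification.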

It is likely that containers, even in the short term, will start to occupy the space between full virtualization and hardware-based HPC systems. In the long term, I expect a wholesale shift towards containerization. Any performance degradation can be mitigated with effective parallelization, or by simply assigning more hardware to the same job – at a scale that only public clouds can deliver.
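A rough back-of-the-envelope calculation shows why that trade-off can work. The figures below – a 99% parallelizable workload and a 5% container overhead – are illustrative assumptions, not benchmarks, but they capture the pattern: renting a few extra cores can more than absorb a modest per-core overhead.

```python
def amdahl_speedup(parallel_fraction, workers):
    """Amdahl's law: ideal speedup when `parallel_fraction` of the
    workload can be spread evenly across `workers` cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

# Assumption: a well-parallelized job (99% parallel) and a
# hypothetical 5% container overhead on raw compute throughput.
overhead = 0.95  # each containerized core delivers 95% of bare metal

bare_metal = amdahl_speedup(0.99, 64)                 # 64 bare-metal cores
containerized = overhead * amdahl_speedup(0.99, 72)   # rent 8 extra cores

print(f"64 bare-metal cores:  {bare_metal:.1f}x speedup")
print(f"72 container cores:   {containerized:.1f}x speedup")
```

Under these assumptions the containerized job, despite its overhead, edges past the smaller bare-metal configuration – which is exactly the "throw rented hardware at it" mitigation described above.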

The programming challenges of modern HPC will need to be addressed as we move into the container age. To achieve the expected performance, millions of threads will need to work together, and we will need new programming methods for this to operate. APIs will be required to work with the operating systems, and new development tools will be needed to provide the correct functions. Inter-node communications will also need to be simplified and become more adaptable. The days of a single, specific interconnect technology handling all of the data in a low-latency fashion are numbered; instead, HPC nodes will have to be able to talk to each other regardless of the underlying infrastructure. They’ll have to be able to work with, and within, the constraints of a more flexible, public-cloud-like network model.
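As a toy illustration of the underlying pattern – scatter work to many workers, compute in parallel, gather the results – here is a sketch using Python's standard multiprocessing pool as a stand-in for a real HPC messaging layer such as MPI. The chunk sizes and workload are arbitrary assumptions; the point is that the decomposition, not the interconnect, defines the program:

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for per-node work, e.g. processing one slice of a survey
    return sum(x * x for x in chunk)

def run(data, workers=4, chunk_size=250):
    # Scatter: split the dataset into independent chunks
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Map: each worker processes its chunks in parallel
    with Pool(processes=workers) as pool:
        partials = pool.map(process_chunk, chunks)
    # Gather/reduce: combine the partial results into one answer
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    print(run(data))  # the result is the same for any worker count
```

Whether the workers are cores on one node or containers scattered across a cloud network, the program only cares that chunks go out and partial results come back – which is the infrastructure-agnostic communication model the paragraph above argues for.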

There’s plenty more to talk about regarding cloud-based HPC – security features, performance, and data requirements are just a few topics – so look for future posts on those and more in the Red Hat Vertical Industries Blog. Meanwhile, let us know where your company stands on cloud-based HPC and containers in the comments section below.

  1. Dave, great article, thanks, with a great vision on containerization of HPC applications, which is already here. While Docker containers, for example, are made for micro-service-type enterprise applications, we have taken them and developed all kinds of HPC-related features on top, layer by layer, such as MPI, RDMA, InfiniBand, GPUs, remote visualization, and virtual desktops, now accessible and usable through any browser, in any cloud – private, public, or hybrid. And what you are envisioning doesn’t just hold for energy, oil and gas, but for any compute-intensive application area, like CAE, computational biology, finance, and even big data analytics and machine learning. The beauty of these containers, especially for HPC, is that they don’t consume performance themselves the way virtual machines often do. Our own benchmarks at UberCloud show that the performance of HPC containers is very similar to that of bare-metal HPC, even at hundreds of cores, underlining what you are writing in your article.

