UBITECH exhibits the MAESTRO distributed apps composition and cloud services orchestration platform at DockerCon 2019

Sponsoring the technology conference DockerCon 2019 (https://www.docker.com/dockercon/), which takes place from April 29 to May 2 at Moscone West in San Francisco, UBITECH has a dedicated exhibition booth at the conference's Ecosystem Expo to present and demonstrate MAESTRO, its platform for distributed applications composition and cloud services orchestration. The MAESTRO platform (themaestro.net) is an advanced developer framework for cloud orchestration and infrastructure automation that gives you the power to design, deploy, and manage cloud-native containerized components in both public and private cloud environments. Built on an IaaS (Infrastructure-as-a-Service) abstraction, MAESTRO lets you create easy-to-manage, easy-to-scale workflows from Docker Compose applications. It comes with advanced off-the-shelf features for extensive monitoring, security enforcement, elasticity management, and operational analytics.
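The Docker Compose applications mentioned above are described in standard Compose files. As a minimal sketch (service names and images here are purely illustrative, not taken from MAESTRO), such an application might look like:

```yaml
# Minimal Docker Compose application; a platform like MAESTRO would take a
# description of this kind as input for deployment across cloud backends.
# Service names and images below are illustrative.
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
  api:
    image: example/api:latest   # hypothetical application image
    environment:
      - DB_HOST=db
  db:
    image: postgres:11
```

Because the application is expressed declaratively, an orchestrator can reason about its services independently of any particular IaaS backend.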

In particular, MAESTRO provides a pragmatic, efficient approach to the sophisticated challenges of both container and multi-cloud adoption, such as optimal infrastructure provisioning, seamless monitoring, autonomic elasticity management, and security. Through a set of intelligent orchestration mechanisms, the MAESTRO framework optimizes the placement of cloud-native applications based on user-defined constraints. Thanks to its IaaS abstraction layer, MAESTRO can deploy applications to many different IaaS backends, such as OpenStack, Amazon Web Services, and Google Cloud. Before deployment, you can configure and activate a set of preferred monitoring metrics or even attach your own custom monitoring probes. MAESTRO also lets you create rich expressions that trigger scaling or security events at runtime; these events are dispatched to configurable handlers. Finally, deployed applications are automatically benchmarked by a sophisticated profiling engine with regard to their computational and memory intensiveness.
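The source does not show MAESTRO's actual rule syntax or API, but the general pattern of a metric expression triggering a configurable handler can be sketched as follows (all names here are hypothetical illustrations, not MAESTRO's interface):

```python
# Illustrative sketch, NOT MAESTRO's actual API: a runtime rule that
# evaluates a metric expression and, when it holds, dispatches to a
# configurable handler (e.g. a scaling action).

def make_rule(expression, handler):
    """Build a rule: fire `handler` whenever `expression` is true for the metrics."""
    def evaluate(metrics):
        # The expression references metric names directly, e.g. "cpu > 80 and mem > 70".
        if eval(expression, {}, metrics):
            return handler(metrics)
        return None  # condition not met: no event
    return evaluate

def scale_out(metrics):
    """A configurable handler reacting to an elasticity event."""
    return f"scale-out triggered (cpu={metrics['cpu']}%)"

rule = make_rule("cpu > 80 and mem > 70", scale_out)
print(rule({"cpu": 91, "mem": 75}))  # handler fires
print(rule({"cpu": 40, "mem": 30}))  # no event
```

Swapping `scale_out` for a different handler (for example, one raising a security alert) changes the reaction without touching the rule-evaluation mechanism, which mirrors the configurable-handler design described above.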