March 27, 2020

Mercedes-Benz R&D creates 'container-driven cars' powered by Microsoft Azure

“Container-driven cars” is the name jokingly proposed by the Connected Cars Platform team at Mercedes-Benz Research and Development (R&D). The concept refers to a microservices-based architecture that relies on containers to fetch and push app updates at the scale of the Internet of Things (IoT). But connected cars are no joke. The software that assists drivers and automates tasks requires precision engineering. Using cloud-native services on Azure, the R&D team freed developers from the limitations of a hardware-driven release cycle, where software updates could happen only once or twice a year. Today developers create new versions as fast as they want and ship new solutions to market in just three months, thanks to a mix of agile tools and managed services on Azure for Kubernetes, APIs, and databases.

Mercedes-Benz Research and Development North America

A history of innovation

Azure was the platform that the Mercedes-Benz R&D teams in North America and Germany trusted to deliver their innovative cloud-based services.

As the R&D arm of Daimler AG, the company has a long history with Microsoft. Daimler is the German multinational automotive corporation whose founders, Gottlieb Daimler and Carl Benz, patented the first automobiles in 1886.

The Connected Cars Platform project is a joint effort by the North American and German R&D teams that specialize in autonomous driving, vehicle infotainment systems, and telematics.

“We needed to modernize the way in-car software was developed,” explains Rodrigo Nunes, a principal software engineer at Mercedes-Benz R&D. The problem was that he and his team couldn’t do it as quickly as they wanted to.

“With Azure, our software development process is so much better now. We’re able to keep the software inside of the cars updated and release new features in weeks instead of months, while preserving quality and security of vehicle and driver data.”

Rodrigo Nunes, principal software engineer, Mercedes-Benz R&D

The challenge of custom hardware

The head unit computer in a car’s dashboard runs the infotainment system, including the navigation system, digital radio tuner, and apps, such as parking and search. This software was flashed as a monolith when a vehicle rolled off the factory line.

Developing an app for the head unit requires a deep understanding of the underlying architecture and the low-level functionality of the manufacturer’s custom hardware and its framework. In the past, changes to this software typically required recertification and requalification of the whole monolith, a time-consuming process involving rigorous QA and physical test drives. Only limited software updates could be delivered over the air, and they consumed a lot of bandwidth. Other updates required the head unit to be reflashed, and the image was so large that the process could take place only at an authorized Mercedes-Benz location.

The development model posed another challenge. As with any connected IoT device, connectivity restrictions limit the types of apps that can be offered. Besides the infotainment system, other vehicle apps use controlled interfaces to interact with a vehicle’s sensors and systems to monitor functions, such as gas level, speed, and GPS location. These apps were created by several development teams, who had to coordinate efforts closely. Release timelines were hampered by the dependencies among the apps created by the external specialists, those built by the head unit manufacturer teams, and others created by domain-specific teams inside the company.

With these limitations, the Connected Cars team updated its software infrequently in a bundle released one or two times a year. “We wanted to decouple the app developer workflows from the release management cycles of the base system and vehicle,” Nunes says. “That’s where containers come in.”

Speeding up with microservices and containers

Nunes’s team of engineers already used microservices to speed development and deployment in the cloud, but they wanted to update the software more frequently and to implement continuous integration (CI) and continuous deployment (CD) practices inside the car. When Daimler adopted Azure as its cloud provider of choice, the Connected Cars Platform team had a clear path forward. Together with the R&D team from Stuttgart, they engaged with Microsoft and came up with a plan based on containers and Kubernetes orchestration.

Containers offer several key advantages. With a cloud-based containerized platform, the team can develop individual apps in the programming language of their choice and then deliver them over the air in a secure and compact way. The development teams use Go, JavaScript, C#, Python, or any other programming language that meets the car’s resource constraints.

Containers also provide a software-only solution, which frees developers from the timelines that constrain the hardware teams. Code can be designed and tested as independent microservices, each encapsulating a specific function.

The R&D team found a fit in Azure Kubernetes Service (AKS), which could orchestrate the team’s microservices-based platform used for distributing containers to cars over the air. AKS shields developers from the low-level complexities of the Kubernetes setup and operations.

“AKS provides the benefit of a managed service but still allows enough control to the developers,” notes Nunes.

Azure also supports a private registry for container images, Azure Container Registry. “We wanted private registries so we could have control of our containerized apps in our cloud operation,” says Nunes. An Azure Container Registry instance is integrated with a Kubernetes cluster in AKS, and an application is deployed from the image.
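As a minimal sketch of that integration, a back-end service can be deployed to the cluster with an ordinary Kubernetes manifest that references an image in the attached private registry. The registry, image, and service names below are placeholders, not the team’s actual configuration.

# Illustrative Kubernetes Deployment that pulls a service image from a
# private Azure Container Registry attached to the AKS cluster.
# Registry, repository, and names are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orchestration-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orchestration-service
  template:
    metadata:
      labels:
        app: orchestration-service
    spec:
      containers:
        - name: orchestration-service
          # Image reference points at the private registry (placeholder name)
          image: contosoregistry.azurecr.io/platform/orchestration-service:1.0.0
          ports:
            - containerPort: 8080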

“Azure provides a fully managed, rich development platform for app development teams.”

Rodrigo Nunes, principal software engineer, Mercedes-Benz R&D

Container-based car architecture

The container-based car service is designed to begin working when a driver starts the car. The head unit sends a request using the vehicle’s SIM card to connect to the service across a mobile network.

AKS handles the back-end operation, which supports two platforms—one for containerized app creation and the other for app configuration and delivery to the vehicles. The AKS cluster runs the following key services:

  • The orchestration service is the containerized platform for the head unit.
  • The source code repository service is an abstraction of the Git repo providers.
  • The CI service acts as the pipeline management system by abstracting the CI/CD providers and establishing the app’s initial pipeline.
  • The feature configuration service abstracts the tools that provide feature toggling and that are used to provision the project and resources for a new head unit app.
Figure 1. In the connected car architecture, API calls are routed to the back-end services through AKS.

On the app creation side, an app gets built as a container image. The image is pushed to the private registry. The app metadata and configuration details, including deployment specifications and a Kubernetes-like pod specification, are pushed to Azure Cosmos DB and made available for use in the testing stage.
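The exact schema isn’t published, but a document sketched loosely after that “Kubernetes-like pod specification” gives a sense of its shape; every field name and value below is hypothetical.

# Hypothetical app metadata document, modeled on the pod-specification-like
# configuration described above; field names and values are illustrative only.
appId: parking-finder
version: "1.4.2"
image: contosoregistry.azurecr.io/headunit/parking-finder:1.4.2
targetArchitectures:
  - amd64
  - arm64
containerSpec:              # Kubernetes PodSpec-like section
  containers:
    - name: parking-finder
      resources:
        limits:
          memory: 96Mi
          cpu: 200m
stage: testing              # made available first to the testing stage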

The other AKS pathway supports app configuration and delivery to the vehicles. To minimize latency, a request for an update is sent via Azure Traffic Manager to the closest AKS cluster. Mercedes-Benz R&D currently hosts clusters in North America, Europe, and China and plans to expand to more locations. A head unit orchestration agent compares an update request to the versions already owned by the driver, pulls the correct image from the closest registry, and sends it back to the car.

In the data layer, Azure Cosmos DB plays a key role and provides the geo-redundancy needed for high availability and low-latency reads and writes worldwide. Other services depend on the metadata and configuration details that are stored in Azure Cosmos DB. The most critical operations are read-intensive, and the team set up multiple fault-tolerant strategies, including an active-passive model based on geo-redundancy.

The duration of the entire round trip varies, depending on the type of app, size of the image, and other variables, but ranges from a few seconds to a few minutes.

API coordination across teams

APIs have become the standard for connecting apps, data, and services. In this way, they are driving digital transformation in organizations. The Connected Cars Platform team used APIs to connect the dots between the efforts of the North American office (handling the back end) and the teams in Europe.

Azure API Management is the endpoint for updates and provides a secure entry point into the service running on Azure. An API gateway, a component of API Management, accepts API calls and routes them to the back ends, effectively decoupling the back-end APIs from the microservices. The API gateway also verifies the certificates and API keys. All API Management endpoints and any external Azure endpoints declared in a containerized app configuration must have a Daimler-specific domain and certificate issued by an internal Daimler certificate authority.

The team defined the interface contracts that would guide all communication between the vehicles (client) and the Daimler back-end systems (server). These contracts capture key details, such as a vehicle’s capabilities and the applications it owns. The team used Swagger to draft formal definitions of those contract APIs and then imported the file into API Management, which supports the OpenAPI specification.

The back-end team used this process to set up mock policies in API Management. Developers can set a policy on an API so that it returns a mocked response, enabling the other teams to proceed with implementation and testing against the API Management instance, even if the back end is not yet available to send real responses. “API Management was an easy choice for us,” says Nunes. “We simply uploaded our Swagger interface and immediately began unblocking the other vendor teams by providing mock policies until we could hook up the real implementations.”
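To give a sense of what such a contract might look like, here is a small, hypothetical OpenAPI fragment for a vehicle-capabilities endpoint; an API Management mock policy can return a sample response like the one defined under example. None of the paths, fields, or values below come from the actual Daimler interface.

# Hypothetical OpenAPI 3.0 fragment; paths, schemas, and sample values are
# illustrative, not the real Daimler contract.
openapi: 3.0.1
info:
  title: Vehicle Capabilities API (illustrative)
  version: "1.0"
paths:
  /vehicles/{vin}/capabilities:
    get:
      summary: List the capabilities and apps a vehicle owns
      parameters:
        - name: vin
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Capabilities owned by the vehicle
          content:
            application/json:
              example:                # a mock policy can return this sample
                headUnitGeneration: "illustrative-gen-6"
                ownedApps:
                  - id: parking-finder
                    version: "1.4.2"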

The vendors were able to continue their development and to test security, connectivity, latency, and other parts of the system. They could more easily uncover potential issues with the in-car orchestration in a scenario closer to future production reality.

When the back-end team was ready to integrate its microservices with API Management, it began replacing the mocked responses with the real thing. “We still leverage this feature from API Management,” says Nunes. “It empowers teams at different stages of development, with different capacities and throughput, and working thousands of miles away from each other. They can build products together by sharing API descriptions via Swagger.”

Other API Management features proved vital to the project, such as certificates, throttling, SSL offloading, and the seamless integration with virtual networks, Azure Load Balancer, and Kubernetes ingress. The team continues to create products that can expose different APIs to callers and split the internal and external APIs into different groups.

“Azure provides the global datacenter footprint and geo-replicated services needed to ensure operational efficiency, minimal latency, and smooth deployment of updates across the world.”

Thomas Spatzier, cloud architect, Mercedes-Benz R&D

DevOps on multiple platforms and in multiple regions

In the past, the Connected Cars Platform team followed CI/CD practices based on open-source tools—an Atlassian Bitbucket version control repository, a Jenkins automation server, and a JFrog Artifactory repository manager. But the overall process wasn’t exactly CI. Each in-car app had its own custom pipeline, requiring some manual effort. Software binaries were delivered by many parties, integrated, and tested, with long integration cycles.

Today, the CI/CD pipeline is based on Azure Pipelines. “We adopted DevOps on Azure almost from the beginning,” Nunes says. Azure is now the platform of choice for all the in-car app developers. The pipeline is provided as a template to app development teams as a YAML file. Developers can start using it immediately—without a deep dive into how DevOps works on Azure.

During app integration, the DevOps flow works like this (a simplified pipeline sketch follows the list):

  1. Build release. When a developer commits a change, Azure Pipelines is triggered to start a build in the provisioned project. The Docker build process occurs in stages, producing images across multiple architectures—AMD64-based and ARM64-based systems. The container is tagged with the new version and pushed to the private registry. If required, the container is also pushed to separate Container Registry instances in different regions worldwide.
  2. Deploy to integration. Azure Pipelines pushes the manifests and associated images (one for each build) to the QA team’s private registry. At the same time, the new candidate version is signed and registered with Azure Cosmos DB.
  3. Test release. The QA team is notified to begin integration testing using the head unit emulator or a test vehicle.
  4. Send to release management. Azure Pipelines triggers an approval process using a manual gate. The approved build is pushed to the production registry, and the version configuration updates are stored in Azure Cosmos DB.
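
The team’s actual template isn’t reproduced here, but a trimmed-down build stage along these lines shows the shape of such a pipeline. The Docker@2 task and the $(Build.BuildNumber) variable are standard Azure Pipelines features; the service connection, repository, and file names are placeholders.

# Illustrative azure-pipelines.yml fragment for the build-and-push step;
# the service connection, repository, and tag scheme are placeholders.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    displayName: Build the app image and push it to the private registry
    inputs:
      command: buildAndPush
      containerRegistry: acr-service-connection   # placeholder service connection
      repository: headunit/parking-finder
      Dockerfile: Dockerfile
      tags: |
        $(Build.BuildNumber)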


The pipeline can rely on some open-source components, as long as they comply with the company’s policies and are approved by the security team. For example, the CI/CD pipeline pushes Docker images to Azure Container Registry, and the app configuration closely resembles the PodSpec specification for Kubernetes.

The use of containers also simplified testing for developers, who previously had no way to preview an app’s appearance or behavior. Individual functional components were too complex to emulate in software and interacted with one another in intricate ways. Developers had to ship a new software build to get it integrated into the head unit image and then flash the image onto a hardware device. If the testing process uncovered a bug, the whole cycle started over, leading to weeks of delay.

Now that developers can build self-contained apps, they can deploy containers into the platform without needing to wait to flash the complete hardware device on scheduled cycles. An emulator that runs on a laptop (or in Azure Virtual Machines) enables developers to run the whole system locally. Individuals can test their apps without needing access to hardware test benches. After these automated tests are run in simulated vehicles in the cloud, developers can map their app to a test car. The test car’s head unit connects to the cloud during vehicle ignition, locates the newly assigned app, retrieves its metadata, pulls the described container image, and starts the app.

In addition, Azure provided the global datacenters needed to reduce latency during deployments and app downloads. The container images and the metadata need to be physically near vehicles during deployment. Azure Traffic Manager directs update requests to the nearest Azure datacenter, and the team uses Azure Container Registry and Azure Cosmos DB multi-region replication to ensure that deployments go smoothly and images are stored as close as possible to the fleet.

An X-ray of operations

To continuously monitor the container-based car service, the team built an operations dashboard using Azure Monitor. “It gives us a full X-ray of operations, from the Kubernetes plane all the way to each individual service running in production,” says Nunes. Previously, the team used Elasticsearch for logging, monitoring, and alerting, while a dedicated vendor handled most of the DevOps tasks and datacenter support.

“As someone who worked with ELK (Elasticsearch, Logstash, and Kibana) for years, I had to get used to the change,” Nunes admits, “but Azure Monitor really exceeded our expectations.”

In particular, he appreciates Azure Monitor for its containers feature. It monitors the performance of container workloads deployed on AKS and collects memory and processor metrics from controllers, nodes, and containers. Plus, when a developer enables monitoring on a cluster, the AKS monitoring add-on kicks in automatically and deploys the DaemonSet that runs the monitoring agents on the nodes.

Next steps

The Connected Cars team knew it could develop new vehicle software and services much faster, if only they could get past a few roadblocks. By containerizing the head unit apps using Azure Container Registry and AKS, the team can now speed along.

“All in all, Azure Container Registry and Azure Kubernetes Service were great pillars to implement improvements to the development workflow for the separate head unit application teams,” notes Nunes.

The team also met the success metrics established as the project’s two goals: to build a fully functional prototype in 24 hours and to ship a new solution to the market in just three months.

The next step is to containerize and decompose more of the existing base software platform. “More than just the infotainment system should be decomposed, containerized, and updateable,” Nunes says. “The goal is essentially to be able to update all in-car software, beyond just the head unit computer.”    

“The Azure ecosystem gave us useful capabilities that the developers worked into their system, such as Azure Container Registry tasks, AKS integration, alerting, monitoring, and logging. Plus, Azure Pipelines made it possible to handle deployment and other tasks in a repeatable way, giving the platform team centralized control and standards enforcement.”

Rodrigo Nunes, principal software engineer, Mercedes-Benz R&D
