Migration to AWS Aurora RDS

At foodora we recently switched our production RDS database from MySQL to Amazon Aurora. In this article we share how we did it and what we learned.

Posted by Riccardo on 1 Aug 2017 in AWS, DevOps
3 Things I Learned Extracting a Service

A few months ago, foodora’s “Search and Discovery” team extracted an Elasticsearch population service from a PHP monolith (I won’t call it a microservice, because our population service is still a large and complex application).

Here are three bits of advice that I wish I’d heard before we started:

1: Mark deprecations as early as possible.

Even if your internal communications are fantastic, when several teams are contributing to the code-base of a monolith it’s easy to forget what everyone is working on.

Since creating the new service might take some time (even if it just takes one iteration of your sprint cycle), it’s worth noting ahead of time that extraction work is in progress. It doesn’t take much effort to deprecate the things you’ll replace, optionally with a note about the service that will replace them and the person or team in charge.

This helps in a number of ways:

  • It increases the chances that anyone who has to modify or make use of any concerned classes will be aware of the upcoming change.
  • Hopefully, anyone who needs to modify or make use of the deprecated classes will keep the team creating the service in the loop about changes that might also need to be mirrored in the new service.
  • Having a pull-request with a set of deprecations provides an easy “watch-list” for logic changes that might occur during the service development process.
  • Marking deprecations can increase your understanding of the scope and complexity of the operations that you’re trying to extract.
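In a PHP monolith, marking a class as deprecated can be as light as a docblock plus a runtime notice. A minimal sketch (the class name and wording are made up):

```php
<?php

/**
 * @deprecated Indexing logic is being extracted into the standalone
 *             population service. Talk to the Search and Discovery
 *             team before changing this class.
 */
class PopulationIndexer
{
    public function index(array $documents): void
    {
        // Emit a deprecation notice so remaining usages show up
        // in logs and IDE inspections.
        @trigger_error(
            'PopulationIndexer is deprecated; its logic is moving to the population service.',
            E_USER_DEPRECATED
        );

        // ... existing indexing logic stays in place until the switch-over ...
    }
}
```

The `@` suppresses direct output; the notice still reaches registered error handlers and logs, which is usually what you want in production.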

2: Automation will only take you so far.

In a scripting language like PHP, recursive dependency mapping can be very complex, especially with the potential for “magic” method usage, dependency injection, direct object creation, Symfony-framework containers and interface constants.

Tracing required and non-required dependencies can be easily automated with an IDE or on the command line (e.g. grepping for classes referenced with PHP’s “use” keyword). I spent some time creating scripts to help with some of the more repetitive elements of dependency tracing. While this did speed up the process of removing unused files, in the end I found that there’s no real substitute for simply going through the code that your service will replace class-by-class and often method-by-method.
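For example, a first-pass import list can be pulled out of PHP’s “use” statements on the command line. A minimal sketch (the file path and class names are made up):

```shell
# Hypothetical example file standing in for a monolith class.
mkdir -p src/Search
cat > src/Search/PopulationService.php <<'EOF'
<?php
use Foo\ElasticClient;
use Legacy\Mapper as M;
use Foo\ElasticClient;
EOF

# First-pass dependency list: strip the "use" keyword,
# aliases and trailing semicolons, then de-duplicate.
grep -h '^use ' src/Search/PopulationService.php \
  | sed -e 's/^use //' -e 's/ as .*//' -e 's/;$//' \
  | sort -u
```

This only catches explicit imports; “magic” calls, container lookups and direct instantiations still need the class-by-class pass described above.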

This was tedious at times, but it created a much better understanding of the service’s functionality. It was also a prime opportunity for some (test-driven) refactoring.

3: Make integration tests a priority.

We started thinking about making all the existing integration tests pass quite early in our development process, but I hadn’t anticipated how complex this task would be.

Next time I extract a service, I’m going to divide this task into smaller tickets, each focused on a more specific sub-task, such as:

  • Ensuring existing integration tests have correct coverage.
  • Creating an area-specific (e.g. city or country) master switch to toggle between old and new functionality.
  • Packaging the new service in a Docker container, perhaps even in a preliminary, “mocked” state.
  • Creating an automated process to pull down a “Dockerized” container of the service before running tests.
  • Making sure that all output, especially from error conditions, is handled correctly by the existing application.
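The two Docker-related sub-tasks above might be sketched as a CI step like this (image name, port and health endpoint are made up):

```shell
#!/usr/bin/env bash
# Sketch: pull and start a Dockerized build of the service,
# wait until it is healthy, then run the existing integration tests.
set -euo pipefail

IMAGE="registry.example.com/search-population-service:latest"

docker pull "$IMAGE"
CONTAINER_ID=$(docker run -d -p 8080:8080 "$IMAGE")

# Poll a (hypothetical) health endpoint before handing over to the suite.
for _ in $(seq 1 30); do
  if curl -fs http://localhost:8080/health > /dev/null; then
    break
  fi
  sleep 1
done

vendor/bin/phpunit --testsuite integration   # existing monolith tests

docker rm -f "$CONTAINER_ID"                 # clean up the container
```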

Overall, this process provided a lot of opportunities for optimising the interface between the monolith and the new service, as well as for finding the minimum system requirements for our service.

What’s next for “Search and Discovery”?

We’re currently working on a new Scala app that will wrap the search side of Elasticsearch, with a focus on auto-generating SDKs, controllers and documentation from Swagger files using tools such as Zalando’s api-first-hand. Some members of our team are actively contributing to the api-first-hand project as well. I think that’s worthy of at least one blog post, coming soon…

Posted by Stuart on 30 Mar 2017 in Architecture, DevOps, Team
Scaling docker clusters using AWS ECS

Like many others these days, we at foodpanda are moving towards a microservices approach using Docker containers. Docker lets you run applications in an isolated way, wherever you need. However, deploying and scaling many containers is hard. Have you heard about Elastic Container Service (ECS) by Amazon Web Services? After reading this article you should have a basic understanding of what this service can offer you. Last but not least, I also compare it with Kubernetes.

Some History

“Microservices” has for some time been one of those cool words every IT developer wants to hear at a conference or in a job interview. Microservices wouldn’t be the same without Docker, and Docker wouldn’t be the same without tools to manage its containers. Two of these container-management tools are Kubernetes and AWS ECS.

At foodpanda, ECS became more interesting when the new Application Load Balancer (ALB) was released. This new load balancer can route traffic to dynamic ports inside EC2 instances. In the past, the Classic Load Balancer could only route traffic to the same port on all the instances of an Auto Scaling Group (ASG).

So, what does this have to do with ECS? This new load balancer can route traffic to the different ports exposed by multiple containers running the same service inside an instance. The Application Load Balancer not only reduces the chances of wasting resources, but also increases the speed of scaling up your application (if no boot time is required).
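On the ECS side, dynamic ports come from setting the host port to 0 in the task definition, so Docker assigns a random host port which the ALB target group then registers. A minimal sketch of a container definition (family, image and memory values are made up):

```json
{
  "family": "web-service",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "registry.example.com/web:latest",
      "memory": 256,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    }
  ]
}
```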

The following picture shows how multiple containers for the same purpose (service) can run, and be scaled, inside the same EC2 instance.

[Figure: Elastic Container Service cluster]

ECS Components

In a nutshell, these are the components of ECS:

  • ECS agent:
    • A daemon from Amazon used to connect instances to a cluster.
    • If it is installed, the EC2 instance is automatically attached to the default cluster.
  • Task definition:
    • Describes the container(s), volume(s) and so on of an ECS service. Task definitions have revisions.
    • A task definition should group containers that share a common purpose.
    • A task definition can contain up to 10 container definitions.
  • Service:
    • Used to configure the desired number of running tasks (instances of a task definition).
    • Defines the scaling and deployment rules for those tasks.
    • Used by the ECS scheduler to keep the desired number of healthy tasks running.
  • Service scheduler:
    • An internal Amazon service (hidden from AWS users and APIs) that manages the cluster.
  • Cluster:
    • Simply holds the elements above.
    • Clusters can contain multiple different container-instance types.
    • Clusters are region-specific.
    • A container instance can only be part of one cluster at a time.

[Figure: ECS components]


ECS is fully supported by both the API and the CLI, so you can easily integrate the deployment of new task definitions (container images) into your CI tool.
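A CI deployment step might look roughly like this with the AWS CLI (cluster, service and file names are made up):

```shell
# Register a new revision of the task definition from a JSON file,
# then roll the service over to it.
aws ecs register-task-definition \
  --cli-input-json file://taskdef.json

aws ecs update-service \
  --cluster production \
  --service web-service \
  --task-definition web-service
```

Passing just the family name to `update-service` points the service at the latest active revision of that task definition.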

Don’t worry if you deploy a broken release by mistake: the ECS scheduler uses the ELB health check to evaluate whether the task definition you are trying to deploy is healthy enough. If the check fails, the ECS scheduler stops the deployment.

[Figure: ECS deployment diagram]


The ECS scheduler only scales the number of tasks, so it only starts and stops containers on the limited number of instances running in your Auto Scaling Group (ASG). This means that you still have to configure instance-scaling policies for that ASG. Luckily, you can use ECS metrics such as “reserved_memory” or “reserved_cpu” to increase or reduce the number of instances available in the cluster. On the other hand, the ECS scheduler can scale not only on classic metrics, such as the CPU consumed by your containers, but also on custom metrics, such as the number of messages visible in an SQS queue.
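As a sketch, wiring an ECS service to an SQS backlog can be done with Application Auto Scaling plus a CloudWatch alarm (all names, ARNs and thresholds here are made up):

```shell
# Let Application Auto Scaling manage the service's desired task count.
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/production/worker-service \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 1 --max-capacity 10

# Add one task whenever the alarm below fires.
aws application-autoscaling put-scaling-policy \
  --policy-name worker-scale-out \
  --service-namespace ecs \
  --resource-id service/production/worker-service \
  --scalable-dimension ecs:service:DesiredCount \
  --policy-type StepScaling \
  --step-scaling-policy-configuration \
      'AdjustmentType=ChangeInCapacity,StepAdjustments=[{MetricIntervalLowerBound=0,ScalingAdjustment=1}],Cooldown=60'

# Alarm on the number of visible messages in the queue; its action is the
# ARN returned by the put-scaling-policy call above.
aws cloudwatch put-metric-alarm \
  --alarm-name worker-queue-backlog \
  --namespace AWS/SQS \
  --metric-name ApproximateNumberOfMessagesVisible \
  --dimensions Name=QueueName,Value=worker-jobs \
  --statistic Average --period 60 \
  --threshold 100 --comparison-operator GreaterThanThreshold \
  --evaluation-periods 2 \
  --alarm-actions <scaling-policy-arn>
```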

ECS compared with Kubernetes (K8s) v1.4.6 (latest stable release)

The following comparison shows what ECS can and cannot do relative to another cluster manager, Kubernetes. Notice that it does not try to show what both can do:

Feature comparison (ECS vs. K8s):

  • Service discovery
  • Secret/Config management
  • Custom metric service scaling (containers)
  • Instance (server) scaling based on cluster metrics
  • Requires extra instance for cluster management
  • Can run on a dev environment
  • Requires extra host for the Service Load Balancer
To me, both have their pros and cons, so which one to use depends on your project requirements. However, if your infrastructure is closely tied to AWS services, ECS is probably the best option, since you can run the missing services yourself.


Posted by Vicente on 14 Nov 2016 in AWS, DevOps
CloudFormation @ foodpanda

CloudFormation is a declarative and flexible language in JSON format that describes your AWS infrastructure. You can enumerate the AWS resources, configuration values and interconnections you need in a template and then let AWS CloudFormation do the rest.
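As a taste, a minimal template with one parameter, one resource and one output might look like this (the bucket name is made up):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example: one S3 bucket.",
  "Parameters": {
    "BucketName": {
      "Type": "String",
      "Default": "foodpanda-example-bucket"
    }
  },
  "Resources": {
    "ExampleBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": { "BucketName": { "Ref": "BucketName" } }
    }
  },
  "Outputs": {
    "BucketArn": {
      "Value": { "Fn::GetAtt": ["ExampleBucket", "Arn"] }
    }
  }
}
```

`Ref` and `Fn::GetAtt` are the interconnections: they let one part of the template (or another stack) consume values produced elsewhere.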

Posted by Jens on 12 Jan 2016 in AWS, DevOps