
Tech Stack

We aim to create an environment that allows us to innovate fast. Every squad can introduce new processes, tools and technologies.

Posted by Mathias on 14 Jan 2018 in Architecture
JAX London 2017

In October 2017, some of the top tech companies got together at JAX London, a conference about Java, microservices, continuous delivery and DevOps, and foodora tech was also there to check out the latest releases and trends for the future of Java.

Foodora tech goes JAX London

After 2 days, 13 presentations and lots of coffee, we returned home with a few new concepts, t-shirts, stickers and the awareness that foodora belongs to the select group of companies at the event already using almost all of the cutting-edge technologies that were discussed there.

Highlights

Resilient microservices

@meteatamel, Google Developer Advocate, took us on a tour of the architecture of resilient microservices with Kubernetes. The managed Kubernetes that Google Kubernetes Engine (GKE) offers is extremely easy to set up and might be a good option for small teams that don't have System Engineers to manage a cluster. Nothing new for foodora here, since we have been using this architecture since January 2017, but the presentation was a nice review and reaffirms that we are going in the right direction.

Java and containers

@danielbryantuk, consultant and author of the book Containerizing Continuous Delivery in Java (O'Reilly, 2018), reviewed CI/CD workflows and discussed the particularities of Java and containers. Oracle is going all in on supporting containerized apps with Java 9: the ability to run an optimized JRE containing only the modules your application needs is on its way (see Java 9 modules).

New release cycle of Java

@DonaldOJDK, Senior Director of Product Management at Oracle, talked about the new release cycle of Java and how the language will move forward faster than ever, with a new release every six months. Some versions will not be maintained for long, which will force companies that choose this path to keep upgrading the Java version of their projects; if your company is more conservative, you still have the option to stay on the long-term support (LTS) versions.

It's a common sentiment in our team that we would have enjoyed it more if the speakers had dived deeper into the topics, but it's also understandable that, given the short duration of the sessions, this is challenging to do. Now it's time to share the knowledge internally with the teams and prepare for the future of fast-paced Java releases.

Happy coding.

Posted by Vitor on 15 Oct 2017 in Conferences
Scaling docker clusters using AWS ECS

Like many others these days, we at foodpanda are moving towards a microservices approach using Docker containers. Docker allows you to run applications in an isolated way, wherever you need. However, running many of them is hard to scale and deploy. Have you heard about Elastic Container Service (ECS) by Amazon Web Services? After reading this article you should have a basic idea of what this service can offer you. Last but not least, I also compare it with Kubernetes.

Some History

Microservices has been, for some time now, one of those cool words every IT developer wants to hear at a conference or in a job interview. Microservices wouldn't be the same without Docker, and Docker wouldn't be the same without a tool to manage its containers. Kubernetes and AWS ECS are two of these container management tools.

At foodpanda, ECS became more interesting when the new Application Load Balancer (ALB) was released. This new load balancer allows routing traffic to dynamic ports inside EC2 instances. In the past, the Classic Load Balancer was only able to route traffic to the same port on all the instances of an Auto Scaling Group (ASG).

So, "what does this have to do with ECS?" This new load balancer can route traffic to different ports exposed by multiple containers running for the same service inside an instance. The Application Load Balancer not only reduces the chances of wasting resources but also increases the speed of scaling up your application (if no boot time is required).

The following picture shows how multiple containers for the same purpose (service) can run and be scaled inside the same EC2 instance.

elastic container service cluster

ECS Components

In a nutshell these are the components of ECS:

  • ECS agent:
    • Daemon by Amazon used for connecting instances to a cluster.
    • If installed, the EC2 instance is automatically attached to the default cluster.
  • Task Definition:
    • Describes the container(s), volume(s), etc. of an ECS service. Task definitions have revisions.
    • A task definition should group containers which share a common purpose.
    • Task definitions can only have up to 10 container definitions.
  • Service:
    • Used for configuring the number of tasks desired to be running.
    • Defines the scaling and deployment rules for the tasks.
    • Used by the ECS scheduler in order to keep the desired number of healthy tasks running.
  • Service Scheduler:
    • Internal service by Amazon (hidden from AWS users and APIs) for managing the cluster.
  • Cluster:
    • It simply holds the elements above.
    • Clusters can contain multiple different container instance types.
    • Clusters are region-specific.
    • Container instances can only be part of one cluster at a time.
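
To make the components above concrete: a task definition is essentially a JSON document. Here is a minimal sketch in Python using boto3 (the family name and container values are hypothetical); note `hostPort: 0`, which asks ECS for a dynamic host port that the Application Load Balancer can route to.

```python
import json

# Hypothetical task definition for a single-container web service.
task_definition = {
    "family": "web-service",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:1.11",
            "cpu": 256,        # CPU units reserved for the container
            "memory": 512,     # hard memory limit in MiB
            "essential": True,
            # hostPort 0 = let ECS pick a dynamic port on the instance
            "portMappings": [{"containerPort": 80, "hostPort": 0}],
        }
    ],
}

# Registering it creates a new revision of the "web-service" family.
# This requires AWS credentials, so it is only sketched here:
# import boto3
# boto3.client("ecs").register_task_definition(**task_definition)

print(json.dumps(task_definition, indent=2))
```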

ecs components

Deployment

ECS is fully supported by both the API and the CLI, so you can easily integrate the deployment of new task definitions (container images) into your CI tool.

Don't worry if you deploy a broken release by mistake: the ECS scheduler uses the ELB health check to evaluate whether the task definition you are trying to deploy is healthy enough. If it fails, the ECS scheduler stops the deployment.
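
A deployment step in CI can be as small as one API call: pointing the service at a new task definition revision triggers a rolling deployment. A sketch in Python with boto3 (the cluster and service names are hypothetical; `update_service` is the ECS API call that does the work):

```python
def build_update_call(cluster, service, task_definition):
    """Build the kwargs for ecs.update_service(); the key names
    mirror the AWS ECS API."""
    return {
        "cluster": cluster,
        "service": service,
        "taskDefinition": task_definition,  # e.g. "web-service:42"
    }

def deploy(cluster, service, task_definition):
    # boto3 is imported lazily so the module loads without AWS installed;
    # calling this function requires AWS credentials.
    import boto3
    ecs = boto3.client("ecs")
    return ecs.update_service(**build_update_call(cluster, service, task_definition))

# In a CI script: deploy("production", "web", "web-service:42")
```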

ecs deployment diagram

Scaling

The ECS scheduler only scales the number of tasks, so it can only start/stop containers on the limited number of instances running in your Auto Scaling Group (ASG). This means that you still have to configure instance scaling policies for that ASG. Luckily, you can use ECS metrics such as reserved memory or reserved CPU to increase/reduce the number of instances available in the cluster. On the other hand, the ECS scheduler can scale not only on classic metrics such as the CPU consumed by your containers, but also on custom metrics such as the number of messages in an SQS queue.
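
Service-level scaling goes through the Application Auto Scaling API, which targets the service's desired task count. A sketch of the parameters involved (the cluster and service names are hypothetical):

```python
# Make the ECS service's desired count a scalable target (2 to 10 tasks).
# The ResourceId format is service/<cluster>/<service>.
scalable_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/production/web",
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 2,
    "MaxCapacity": 10,
}

# Registering it is one call (requires AWS credentials, sketched only):
# import boto3
# aas = boto3.client("application-autoscaling")
# aas.register_scalable_target(**scalable_target)
#
# A CloudWatch alarm on a metric of your choice (service CPU, or a custom
# metric such as SQS queue depth) then triggers a scaling policy attached
# to this target. Instance scaling stays on the ASG side, driven by the
# cluster's reserved CPU/memory metrics.

print(scalable_target["ResourceId"])
```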

ECS compared with Kubernetes (K8s) v1.4.6 (latest stable release)

The following table shows what ECS can and cannot do compared with another cluster manager, Kubernetes. Notice that it does not try to show what both can do:

| Feature | ECS | K8s |
| --- | --- | --- |
| Service discovery | | |
| Secret/Config management | | |
| Custom metric service scaling (containers) | | |
| Instance (server) scaling based on cluster metrics | | |
| Requires extra instance for cluster management | | |
| Can run on dev env | | |
| Requires extra host for Service Load Balancer | | |

To me, both have their pros and cons, so it is up to the project requirements which one to use. However, if your infrastructure is really tied to AWS services, ECS is probably the best option, since you can run the missing services yourself.


Posted by Vicente on 14 Nov 2016 in AWS, DevOps