MicroXchg 2018

Last week our team was at MicroXchg 2018, checking out the best that the microservices architecture has to offer.

From simple synchronous communication to complex event storming, the conference offered very interesting insights.

The event-driven approach to communication between microservices seems to be the most commonly used method.

Lutz Huehnken explains in his talk “Designing reactive system with event storming” how to design your system using events, domain-driven design and commands.

Lutz promotes the idea of focusing on the events first, describing the system in terms of the things that happen, making concepts explicit, and only then exploring the commands that trigger these events. He emphasizes that one should not jump to conclusions, but instead let the aggregates emerge from the flow of events and commands.
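In code, the events-first flow described above might be sketched like this (Ruby used for illustration; the domain names are hypothetical, not from the talk):

```ruby
# Domain events describe things that have already happened; commands
# express intent and may be rejected. All names here are invented.
OrderPlaced = Struct.new(:order_id, :items)
PlaceOrder  = Struct.new(:order_id, :items)

# The aggregate emerges from the flow: it decides whether a command
# is valid and, if so, which events result.
class Order
  def initialize
    @placed = false
  end

  # Handle a command and return the resulting events (or raise).
  def handle(command)
    case command
    when PlaceOrder
      raise "order already placed" if @placed
      raise "empty order" if command.items.empty?
      @placed = true
      [OrderPlaced.new(command.order_id, command.items)]
    else
      raise "unknown command"
    end
  end
end
```

Starting from `OrderPlaced` and only then asking "which command triggers this?" keeps the design anchored in what actually happens in the domain.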

Service mesh is emerging as a dedicated infrastructure layer for making service-to-service communication safe, fast, and reliable.

Liz Rice, Daniel Bryant and Owen Garrett mentioned how a service mesh can improve service-to-service communication by providing, at the infrastructure level, traffic control, timeouts/deadlines, circuit-breaker functionality and more.
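The circuit-breaker functionality mentioned here can be sketched in application code (a minimal Ruby sketch, not from any of the talks); the point of a mesh is that a sidecar proxy handles this for you instead:

```ruby
# A minimal circuit breaker: after `threshold` consecutive failures
# the circuit opens and calls fail fast until `reset` is invoked.
class CircuitBreaker
  class OpenError < StandardError; end

  def initialize(threshold: 3)
    @threshold = threshold
    @failures = 0
  end

  def call
    raise OpenError, "circuit open" if open?
    result = yield
    @failures = 0  # a success closes the window again
    result
  rescue OpenError
    raise
  rescue StandardError
    @failures += 1
    raise
  end

  def open?
    @failures >= @threshold
  end

  def reset
    @failures = 0
  end
end
```

A mesh moves exactly this kind of retry/timeout/failure bookkeeping out of every service and into the communication layer.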

The idea is that a service mesh acts on layers 5 (session) and 6 (presentation), removing this complexity entirely from layer 7 (application). Several options to help with this were presented during the talks.

Security is always a concern

Everyone knows that security is always a concern, and nothing changes in this regard with a microservices architecture. Andrew Martin showed in his talk “Continuous Kubernetes Security” how a backdoor in one of your containers can cause serious damage.

Both Andrew Martin and Liz Rice used their presentations to give some tips on how to improve your security when using containers:

  • Always apply patches to your Docker images
  • Scan automatically for vulnerabilities
  • Reduce Docker image size (a smaller attack surface)
  • Encrypt connections between containers

Many other topics were discussed at the conference, and we certainly learned a lot. The most important takeaway is that Pandora is heading in the right direction when it comes to microservices architecture.

The talks are available on the conference's YouTube channel, MicroXchg 2018.

Posted by carminato on 28 Mar 2018 in Architecture, Conferences, Team

Tech Talks and the motivation behind them

Tech Talk is the tool we use to integrate teams, share knowledge and keep the tech up-to-date.

It's no secret that a company which aims to be a top spot for good engineers needs to stay up-to-date with technology.

Some companies like Google and AWS use Tech Talks to achieve this goal, and we really like how Google defines them:

Google Tech Talks is a grass-roots program at Google for sharing information of interest to the technical community.  At its best, it’s part of an ongoing discussion about our world featuring top experts in diverse fields.  Presentations range from the broadest of perspective overviews to the most technical of deep dives, on topics well-established to wildly speculative.

This is basically why we do it: to share information about tech-related topics. Topics can range from specific technologies to design/architectural decisions being made, or technical challenges within our technical community.

How do we do it?

In a team full of skilled engineers you always need an incentive to motivate everyone to share; in our case, this motivation comes from beer and pizza. At the beginning of every month we send out a Call 4 Papers asking the engineers to submit their presentations; if there are too many submissions, a poll is created on Slack to decide what will be presented. The Tech Talk takes place at the end of the same month with the most-voted presentations.

At the end of every presentation in the Tech Talk we have a Q&A session of at least 15 minutes. During this time the team and the speaker can exchange ideas and opinions about the presented topic in detail. This is also the time to check which points from the presented topic could fit and be applied at foodora.

What did we learn so far?

The initial motivation was to provide a face-to-face means of knowledge sharing among different teams. We have come to realize that the Tech Talk effect is an engaging atmosphere of dialog and discussion, which is very difficult to achieve in other settings.

It is also a very good ice-breaking environment for new joiners, and it serves as a great opportunity for brainstorming. It also gives insights from the diverse fields of expertise that everyone brings to the room.


Interested in joining this amazing team and being part of this experience? Check out our open positions here!

Posted by carminato on 5 Feb 2018 in Office, Team
Tech Team Q&A

During the interview process, candidates usually have a lot of questions about the team at foodora. This post tries to address the most common questions.

Posted by Mathias on 1 Jan 2018 in Team
How should Quality Professionals prepare for the future?

This article was originally published on

The initial spark of rebellion started in the 1990s, when professionals came to believe that not all software problems must be treated as engineering problems. People had grown tired of the high regulation of the late-1970s engineering-like processes, the inherent frustration of micro-management, and the obsolescence of requirements. The decade saw early attempts such as Rapid Application Development (RAD) and the Unified Process, which served as early drivers of what later became Agile development, whose manifesto was written in 2001.

However, it was over the past 12 years that the movement shaped and transformed the way we conduct software projects. The transformation took place in front of our eyes, yet we did nothing to tailor one of its essential challenges: testing. We watched the process solve the problems of the 1990s, and we also watched it drift away from QA, leaving it no adequate place. We took our place within software projects for granted. Instead of adjusting that place, we threw tantrums whenever we were bypassed, and we grew ever more timid, holding on to our old cherished activities, which are basically at the root of the problem.

Why do we need to prepare? Because the way software is developed and what the market needs have changed and become faster than how we do QA. There is a practice lag as well as a technological/technical one. Here are the main changes in software industry dynamics that beg us to prepare:

1) Agility and Innovation Time-to-market are the main drivers

Time-to-market in the old days was about making a piece of software available to users within an adequate timeframe relative to the competitive market. However, that generic definition no longer applies to software. It is now about when we can make "innovation" available. The definition of time-to-market has changed from mere availability to innovation availability due to two main factors: 1) as we started to produce software much more quickly and easily, thanks to the transformative technology of the late 1990s and 2000s, business owners stretched their ambitions towards innovative features; and 2) users are no longer users of software, they became consumers of software, which means their needs won't ever be static and they will be in continuous hunger for more features.

With this change in industry and market dynamics, we are producing software at higher speed and in bigger sizes, thanks to software development technologies that made this possible. Alas, QA has fallen behind in this race. The speed of development and the quantity of production have so far outpaced QA's ability to catch up that QA has become a bottleneck.

2) There is a considerable gap between QA practice & Dev/Release

The QA artifact is human thinking rather than tools, whether that artifact is test cases or clicks and keystrokes. Thinking requires the input of information, past knowledge, and understanding. Even then, there will be gaps in understanding and blind spots, which means asking questions or rethinking the inputs. The challenge of this thought process for QA is not that it is a thinking activity; after all, every phase of the software development process is operated by human thought. The challenge is that QA is an outsider to the software. QA doesn't create any of it: we didn't conceive the thought itself, which is created by Product Owners, and we didn't create its implementation, which is created by developers. Thus, we are always on the outside, thinking about something separate from us, and this makes our thought process challenging in terms of time. In the end, QA ends up with more knowledge than the developers for this very reason, but at a price in time that the market is no longer willing to pay.

Reality molds thinking

When the mind is combined with the craft of the hand, one's thinking gets personified in a physical entity. Reality molds our thinking in return for the thinking that gave birth to that reality. It's as if our thinking and reality are in a continuous feedback loop, each reinforcing the other.

And this is what advances in development technology have successfully sought to achieve: what I would call Eliminate and Elevate. They eliminated human thinking altogether in some parts, while elevating the level of human thinking in others. For instance, developers once had to think about very fine details, like exception handling; now, a developer is relieved from guarding against array out-of-bounds errors. This is a case where human thinking is eliminated altogether. On the other side, development technology has allowed developers to think in terms of constructs and models. If/else statements, classes, tags, functions, data structures… are all examples of constructs that conveniently embody human thinking. This is all unlike the human thinking activity of QA: we need to tackle every level of detail, small and big (no elimination), and we need to think in the open, like a philosopher in the garden looking into the sky, trying to articulate the possible angles on a problem (no elevation). It is all raw thinking for QA. Ironically enough, this is exactly what separates QA from users and makes its existence in companies valuable! When QA cuts corners on this tiresome thinking activity (to save time, or out of laziness), it loses its existential value in the company and becomes a mere user clicking and playing around, something we could hire anyone off the street to do; in fact, the model already exists: crowdsourced testing and pay-by-bug services.

All the major factors above have been challenging QA over the past years, and with time the gap between QA and software development and production has widened more and more. QA is important, and its survival in the field as a standalone activity despite all these challenges is sufficient proof of that importance. However, reality always finds a way: while the industry is not seeking to remove QA, it is investing in finding better ways to achieve it. This is happening now; and it is now that you, QA, must start to prepare. How?

A) Become a Domain Expert

To break the barrier between QA and the software being tested, one needs to be a domain expert. Most QAs concern themselves with learning the software they are testing and becoming experts in it. What QA lacks is expertise in the entire domain they are working in, not just their immediate software. By being a domain expert, you bring more value to testing, you contribute to the business, and you become much quicker at generating the test scenarios that matter most; as a domain expert, you have a fingertip sense of where to hit. And from an economic perspective, you increase your future employability, should you shift to product management or to a new company that wants to hire your domain expertise. How can you achieve that? Suppose you are working on education and learning products/software:

a) You should invest some of your time researching and reading about other products in the same domain: what their features are, how they function, etc.

b) Not only this, you should also research the technologies behind them that serve your domain: their strengths, limitations, etc.

c) And last but not least, dedicate one weekend every month to downloading those products' apps, or registering for an account, and playing around with them in an exploratory-testing manner. This exercise will sharpen your skills and deepen your knowledge, as well as exposing you to different types of bugs that will help you come up with scenarios you hadn't articulated for your own software.

B) Learn about Software Design

Recall our earlier discussion of users' tolerance for non-essential bugs, provided the feature's mission is error-free? When you start to think of software as features rather than discrete functional specs and bugs, it becomes clearer that the critical problem in today's software is poorly designed features. They deter users from using the product more than non-mission-critical bugs do. In fact, most mission-critical bugs in features are due to the poor design of those features.

In this undertaking, you should partition your learning into two paths: 1) usability and 2) design patterns; that is, you cover the outside and the inside. You can follow a focused approach by first learning what your company or product is using. You can ask developers which architectural design pattern they use and start with that. You can also read about the usability of apps in your domain, and do your homework by thinking: for a feature like X, which usability and design elements bring it forward the most? With such questions, combined with your education in software design, you are not only better at testing your features but an essential contributor to them.

C) Master the science of Measurement

Quality metrics and measurement form one of the most deserted branches of QA, despite their high importance. When they are missing, nobody notices; but try just once to send a real report that provides insightful measurements based on solid metrics, and see how much attention that report harnesses. Measurements are light: if you are able to measure, then you are the person who can tell everybody else what is really going on. The entire world is fanatic about statistics; in your work as QA, this is how you become the highlight.

To master this science, you first need to define your metrics (i.e., what you are interested in measuring) and your objectives in measuring them. Then you need to spend time gaining mastery over these metrics, so that the measurements derived from them are reliable and accurate. You will rarely be able to do this independently; therefore, be ready to speak to other teams and gather some information. And don't report to the public on the first shot: it is best to spend time reporting internally, or to yourself, until you have mastered the metrics, found better ways of collection, and obtained repeatable results.
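As a hypothetical illustration (in Ruby; the numbers and metric choices are invented, not from the article), even two precisely defined metrics can anchor a report:

```ruby
# Two simple quality metrics. Define each metric precisely first,
# then make its measurement repeatable.

# Percentage of executed tests that passed, rounded to one decimal.
def pass_rate(passed:, total:)
  return 0.0 if total.zero?
  (passed.to_f / total * 100).round(1)
end

# Defects per thousand lines of code (KLOC).
def defect_density(defects:, lines_of_code:)
  (defects.to_f / (lines_of_code / 1000.0)).round(2)
end
```

For example, `pass_rate(passed: 188, total: 200)` gives 94.0, and `defect_density(defects: 12, lines_of_code: 48_000)` gives 0.25 defects per KLOC; what matters is that the definitions stay fixed so results are comparable between reports.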

D) Learn Systems Thinking/Engineering Fundamentals

Plenty of modern problems are systems thinking and engineering problems, and plenty of the problems we inject into our own lives stem from our failure to think at the system level. For instance, suppose you are producing a very nice app that helps people learn about traffic before they set out. The app was very slow in crawling traffic information and displaying it to users. After studying the problem, your team realized it lay in the frontend layer, and worked hard to slash the slow operation time by reducing the number of function calls in the frontend and minimizing decision nodes in each function. You, the QA, were tasked with testing this improvement. You tested it functionally and non-functionally, running performance tests, and you cleared the app version as ready to go. The day after release, the entire team was called in: the backend system on the server side, which is supposed to register and dispatch the traffic information, had failed to route traffic data in a timely manner; users started to get information about the traffic conditions on their route several minutes late, losing much of its value. The team studied the problem and discovered that by streamlining the frontend's performance, they had increased the number of requests between the frontend and the controller operating the backend. They had never seen this before because they had never had such a smooth flow; the streamlined part had put pressure on another, remote part. This is a typical example of a system problem.

What is systems thinking/engineering? In a straightforward, jargon-free definition, it is the recognition that your variables are rarely independent but rather interdependent. When you change variable X, you also affect variable Y in some way. Your mission as a systems thinker is to uncover the variable Y and the type of interdependent relationship it has with the known variable X. To train yourself to be better at systems thinking:

a) There are Systems Thinking/Engineering tools available that you can borrow and use. For instance, I would recommend Quality Function Deployment (QFD) as an uncomplicated and pleasant technique to start with. Using it, you can draw the interconnections between different parts and attributes, as well as weigh them.

b) Train yourself to look several steps away from the point you are interested in, both leading to it and leaving from it. If you are a computer science graduate, you will know automata theory, which we use to define syntactic grammars. An automaton is a model of a machine: how it operates. The same applies to a feature. How it operates is not a single precise point but an entire model around it, with your feature being only one point in that model, with lines of connected dots before it and lines of connected dots after it. If you train yourself to think in terms of your feature's automaton, that is, the entire graph of points connected to it, you will uncover unexpected remote parts impacted by the feature at hand.
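Thinking of a feature as one node in a graph of connected dots can be made concrete. A minimal Ruby sketch (the node names are invented, echoing the traffic-app story above):

```ruby
# A feature is one node in a directed dependency graph. Tracing
# everything reachable from it uncovers remote parts it may impact.
def reachable(graph, start)
  seen = []
  queue = [start]
  until queue.empty?
    node = queue.shift
    next if seen.include?(node)
    seen << node
    queue.concat(graph.fetch(node, []))
  end
  seen - [start]  # everything downstream of the starting feature
end
```

In the traffic-app example, tracing from the frontend fix would have surfaced the backend dispatcher as an impacted remote part before release rather than after.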

E) Master Test Automation

Test automation could be the first step in closing the technological gap between quality and development, as well as release/delivery processes. Software development is getting quicker, software usage combinations are increasing, and market demands are pushing, while testing remains a purely human activity. If we need to release a new app version that fixes bugs or introduces something new every week or two, we need to be able to run over everything and provide feedback in one or two hours. This is impossible with manual testing. The matter becomes even more pressing with Continuous Integration and Delivery (CI/CD). With each pull request, the subset of the test automation corresponding to the areas impacted by that pull request is automatically triggered. When it is green, the pull request is automatically merged; when it is red, it is automatically rolled back, and the causes of the failure are investigated further.
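The CI gate just described can be modelled in a few lines (a toy Ruby sketch; the check names and suites are invented stand-ins for real test runs):

```ruby
# Map each area of the codebase to the automated checks covering it.
# The lambdas stand in for real test suites.
CHECKS = {
  "search"   => -> { 2 + 2 == 4 },
  "checkout" => -> { "total".upcase == "TOTAL" },
}

# Run only the checks for the areas a pull request touches:
# merge on green, roll back on red.
def gate(touched_areas)
  results = touched_areas.map { |area| CHECKS.fetch(area).call }
  results.all? ? :merge : :rollback
end
```

The key idea is the mapping from impacted areas to a subset of the automation, so feedback stays within the one-to-two-hour budget.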

If you want to master test automation effectively, I recommend that you first find state-of-the-art technologies and learn them. I have seen many people attempt automation by starting with plugins and code-generation tools or, if they are more serious, by jumping into a programming language they are told is used in automation. This is the wrong approach; it will prolong your journey and make you less efficient. Plugins and code-generation tools can be useful for testing static software, but they disintegrate when you are working on an organic product that grows new features all the time. On the other side, test automation is a family of technologies, never a single thing: developing and working with a real test automation framework means working with a set of technologies combined. It is therefore important to first find this family of technologies and follow an integrative approach to learning them. There are many languages and technologies out there; however, my recommendation always goes to Cucumber, Capybara and the Ruby programming language, together with Selenium WebDriver. From experience, I find they play together most efficiently, and they are supported by a wide variety of code libraries for virtually any programming task you can imagine. However, don't reinvent the wheel: if you are already adept in PHP, find the equivalent technologies that revolve around it; the same applies if you are more experienced in Java. Those are also popular in the field of test automation, and you will surely find supporting platforms for them.

As we step into the field of software, we should be tireless in preparing and developing ourselves. In a rapid industry like software, it is slow death to shelve yourself and fail to notice the trends, problems, and transformations. It is also the nature of this field that you owe your edge to yourself, not to the company. You carry the responsibility of seeing things coming and preparing to handle them.

Posted by Ahmed on 12 Sep 2017 in Team
3 Things I Learned Extracting a Service

A few months ago, foodora's "Search and Discovery" team extracted an Elasticsearch population service from a PHP monolith (I won't call it a microservice, because our population service is still a large and complex application).

Here are three bits of advice that I wish I’d heard before we started:

1: Mark deprecations as early as possible.

Even if your internal communications are fantastic, when several teams are contributing to the codebase of a monolith it's easy to lose track of what everyone is working on.

Since creating the new service might take some time (even if it takes just one iteration of your sprint cycle), it's worth noting ahead of time that extraction work is in progress. It doesn't take much effort to deprecate the things you'll replace, optionally with a note about the service that will replace them and the person / team in charge.

This helps in a number of ways:

  • It increases the chances that anyone who has to modify or make use of any concerned classes will be aware of the upcoming change.
  • Hopefully, anyone who needs to modify or make use of the deprecated classes will keep the team creating the service in the loop about changes that might also need to be mirrored in the new service.
  • Having a pull-request with a set of deprecations provides an easy “watch-list” for logic changes that might occur during the service development process.
  • Marking deprecations can increase your understanding of the scope and complexity of the operations that you’re trying to extract.

2: Automation will only take you so far.

In a scripting language like PHP, recursive dependency mapping can be very complex, especially given the potential for "magic" method usage, dependency injection, direct object creation, Symfony framework containers and interface constants.

Tracing required and non-required dependencies can easily be automated with an IDE or on the command line (e.g. grepping for classes referenced with PHP's "use" keyword). I spent some time creating scripts to help with the more repetitive elements of dependency tracing. While this did speed up the process of removing unused files, in the end I found that there's no real substitute for simply going through the code your service will replace, class by class and often method by method.

This was tedious at times, but it created a much better understanding of the service’s functionality. It was also a prime opportunity for some (test-driven) refactoring.
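The automated first pass mentioned above (grepping for classes pulled in via PHP's `use` keyword) might look like this Ruby sketch; as noted, it only catches explicit imports, so the manual class-by-class review still matters:

```ruby
# Collect the classes a PHP source file imports via `use` statements.
# "Magic" calls, container wiring and direct instantiation are NOT
# caught here and still require manual tracing.
def php_use_imports(source)
  source.scan(/^use\s+([\w\\]+)\s*;/).flatten.uniq
end
```

Running this across the monolith gives a first, incomplete map of which classes the extracted code touches.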

3: Make integration tests a priority.

We started thinking quite early in our development process about making all the existing integration tests pass, but I had no idea how complex this task would be.

Next time I extract a service, I'm going to try to divide this task into smaller tickets, each focused on a more specific sub-task such as:

  • Ensuring existing integration tests have correct coverage.
  • Creating an area-specific (e.g. city or country) master switch to toggle old and new functionality.
  • Packaging the new service in a Docker container, perhaps even in a preliminary, “mocked” state.
  • Creating an automated process to pull down a “Dockerized” container of the service before running tests.
  • Making sure that all output, especially from error conditions, is handled correctly by the existing application.
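The area-specific master switch from the list above could be as small as this (a Ruby sketch; in the PHP monolith this would presumably be config-driven, and all names here are invented):

```ruby
# Toggle old vs. new population logic per country, defaulting to the
# legacy path for anything not explicitly migrated yet.
class PopulationSwitch
  def initialize(migrated_countries)
    @migrated = migrated_countries
  end

  def backend_for(country)
    @migrated.include?(country) ? :new_service : :legacy_monolith
  end
end
```

Rolling out country by country keeps the blast radius small and makes it trivial to fall back to the monolith if the new service misbehaves.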

Overall, this process offered a lot of potential for optimising the interface between the monolith and the new service, as well as for finding the minimum system requirements for our service.

What’s next for “Search and Discovery”?

We're currently working on a new Scala app that will wrap the search side of Elasticsearch, with a focus on auto-generating SDKs, controllers and documentation from Swagger files using tools such as Zalando's api-first-hand. Some members of our team are actively contributing to the api-first-hand project as well. I think that's worthy of at least one blog post, coming soon…

Posted by Stuart on 30 Mar 2017 in Architecture, DevOps, Team
foodpanda joins foodora

Following the recent acquisition of foodpanda by Delivery Hero, the Technology Team of foodpanda has today joined foodora. With combined forces, the now much bigger team will support all foodpanda and foodora countries, working together towards our common goal of creating the world's most customer-friendly food ordering platform.

In the same step, this blog was rebranded and moved to its new home.

Posted by Mathias on 13 Feb 2017 in Team
Our Hong Kong team

Learn more about our corporate product and our team based in Hong Kong. This article covers our internal processes, development methodology and the technical setup.

Posted by Agata on 25 Jan 2017 in Team
Merry Christmas!

The whole foodpanda tech team wishes everybody wonderful holidays and a few relaxing days with their loved ones!

Posted by Mathias on 24 Dec 2016 in Team