INFRASTRUCTURE AUTOMATION
Client: Online Sports Betting
Length: 2 months
Goal: Automate infrastructure creation and instance configuration with all the tools, frameworks and applications in the stack
Tech: Terraform, Ansible, Vagrant, Docker, AWS
We created a recommender engine for a client’s online sports betting service. This recommender had to support both standard betting (pre-match betting offer) and live betting (real-time betting offer). We also had to provide an automated provisioning (deployment, configuration and operations) solution for the entire tech stack, i.e. all the tools needed for the recommender to work.
The deployment target for the recommender engine was an on-premise environment with virtual servers, but to start quickly we used AWS infrastructure for development. We also wanted a local environment for a fast development cycle, so we used Vagrant to give developers a comfortable setup for working on the automation scripts. We needed a good tool for abstracting the target infrastructure, where switching between different profiles could be done effortlessly. We also needed a good configuration and orchestration tool to manage an ensemble of various components in the target tech stack (Kafka, Spark, Cassandra, Airflow, Python and Scala applications).
We decided to go with Terraform for infrastructure provisioning, since it offers a clear, easily understandable infrastructure abstraction and is easy to use. Ansible was selected as the provisioning (configuration and orchestration) tool. Ansible has solid support for Docker, which we planned to use. Moreover, Ansible scripts (playbooks and roles) can serve as infrastructure documentation, which is really important for anyone who has to maintain the project. Separate installation documentation usually becomes outdated quickly, so using the code as the actual documentation is a big benefit.
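As a rough illustration of the kind of infrastructure abstraction Terraform enables, a variable-driven configuration can switch between environment profiles. All names, counts and IDs below are illustrative assumptions, not taken from the actual project:

```hcl
# Hypothetical sketch: an environment variable selects the profile,
# and resource counts follow from it. The AMI id is a placeholder.
variable "environment" {
  description = "Deployment profile (e.g. dev on AWS, prod on-premise)"
  default     = "dev"
}

provider "aws" {
  region = "eu-west-1"
}

# Kafka broker instances, only created in the AWS dev profile.
resource "aws_instance" "kafka" {
  count         = var.environment == "dev" ? 3 : 0
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.large"

  tags = {
    Role        = "kafka"
    Environment = var.environment
  }
}
```

Switching profiles then means changing a single variable rather than rewriting resource definitions, which is what makes moving between the AWS development environment and the on-premise target manageable.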
We created a fully automated infrastructure using Ansible, which installed the many components needed for our solution to work properly. We have a rule never to connect to machines and make changes manually, which helps a lot when reasoning about problems and also prevents human error. All our applications are dockerized, which makes maintenance more flexible. The Ansible scripts we developed provide operations on the Docker containers that encapsulate our application components (e.g. stopping and starting services, updating service versions through Docker image updates, etc.). Docker also plays a significant role in resource management and provides a good foundation for a transition towards automated resource management platforms. This was an additional reason why we selected Docker, as we plan to start using Kubernetes and Mesos for this and similar projects.
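A container-management task of the kind described above could look roughly like the following Ansible sketch. The image name, registry and variable are hypothetical placeholders, not the project's actual configuration:

```yaml
# Hypothetical role tasks: update a dockerized service to a new version.
# recommender_version would typically come from inventory or extra-vars.
- name: Pull the requested image version
  docker_image:
    name: "registry.example.com/recommender"
    tag: "{{ recommender_version }}"
    source: pull

- name: Recreate the service container on the new image
  docker_container:
    name: recommender
    image: "registry.example.com/recommender:{{ recommender_version }}"
    state: started
    restart_policy: unless-stopped
    recreate: true
```

Because the desired state (image version, restart policy) lives in the playbook rather than in someone's shell history, the same run that updates a service also documents how it is deployed, in line with the no-manual-changes rule above.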