
This is part 3 of the Microservices with ECS implementation. In part 1, we covered how to implement a microservice architecture using Amazon ECS with its service discovery feature and a rolling update strategy. In part 2, we covered how to use AWS App Mesh to block/allow service-to-service communication and do a canary or blue/green deployment without any impact to the services. In this part, we are going to cover how to publish service metrics to the AWS X-Ray service. …


Monolith to Microservices

This is part 2 of the Microservices with ECS implementation. In part 1, we covered how to implement a microservice architecture using Amazon ECS with its service discovery feature and a rolling update strategy. In this tutorial, we are going to see how we can use AWS App Mesh to block/allow service-to-service communication and do a canary or blue/green deployment without any impact to the services. Please read part 1 first for better continuity.

Quick Intro to AWS App Mesh

AWS App Mesh is a service mesh that makes it easy to monitor and control services. App Mesh standardizes how your services communicate, giving you end-to-end visibility and helping to ensure high availability for your applications. App Mesh gives you consistent visibility and network traffic controls for every service in an application. …
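As a taste of the traffic controls App Mesh gives you, a weighted route can shift a small share of traffic to a new version for a canary rollout. A minimal sketch (the mesh, router, and virtual node names are hypothetical placeholders, not from the article):

```shell
# Canary: send 90% of traffic to v1 and 10% to v2 by setting
# weighted targets on an App Mesh route (all names are placeholders).
aws appmesh create-route \
    --mesh-name demo-mesh \
    --virtual-router-name web-router \
    --route-name web-route \
    --spec '{
      "httpRoute": {
        "match": { "prefix": "/" },
        "action": {
          "weightedTargets": [
            { "virtualNode": "web-v1", "weight": 90 },
            { "virtualNode": "web-v2", "weight": 10 }
          ]
        }
      }
    }'
```

Raising the weight on `web-v2` in steps (and updating the route with `aws appmesh update-route`) completes the canary without touching the services themselves.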



Container-based microservice architectures have changed the way development and operations teams test and deploy modern applications and services. Containers help companies modernize by making it easier to scale and deploy applications, but they have also introduced new challenges and more complexity by creating an entirely new infrastructure ecosystem. Amazon ECS is the container management service we are going to discuss today.

A quick intro to ECS

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage containers on a cluster. Your containers are defined in a task definition that you use to run individual tasks or tasks within a service. In this context, a service is a configuration that enables you to run and maintain a specified number of tasks simultaneously in a cluster. You can run your tasks and services on a serverless infrastructure that is managed by AWS Fargate. …
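To make the task definition concept concrete, here is a minimal sketch of one for Fargate (the family, container name, and image are hypothetical; a real definition would typically also set an execution role for pulling images and shipping logs):

```json
{
  "family": "web-api",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web-api",
      "image": "nginx:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
```

You would register it with `aws ecs register-task-definition --cli-input-json file://taskdef.json` and then point a service at the `web-api` family to keep the desired number of tasks running.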



PKI (Public Key Infrastructure)

PKI is a framework used to create, manage, and distribute public keys and their affiliated crypto-mechanisms. Nowadays, many organizations rely on PKI to manage security through encryption. The encryption used through PKI is mostly asymmetric encryption, which involves two keys, a public key and a private key: anyone can use the public key to encrypt data, and only the owner of the private key can decrypt it. We use PKI to verify devices, websites, services etc…

Common examples of PKI today are SSL certificates on websites. SSL certificates ensure that visitors are actually accessing the intended website and not a malicious one. We can use PKI for public websites as well as for our internal websites and services. In this blog, we will focus on how we manage our PKI for internal websites/services like AWS load balancers, S3 endpoints, API Gateways…
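The public/private key split described above can be seen end to end with a few OpenSSL commands. This is a toy sketch (file names and the message are arbitrary; real PKI deployments use CA-issued certificates rather than raw key pairs):

```shell
# Generate an RSA key pair: the private key stays secret,
# the public key can be handed to anyone.
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# Anyone holding the public key can encrypt...
printf 'hello' > msg.txt
openssl pkeyutl -encrypt -pubin -inkey public.pem -in msg.txt -out msg.enc

# ...but only the private-key holder can decrypt.
openssl pkeyutl -decrypt -inkey private.pem -in msg.enc
```

The last command prints the original message, which is the whole point: the encrypted file is useless to anyone without `private.pem`.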


AWS TGW

AWS — Transit Gateway (TGW)

Transit Gateway is a service provided by AWS in order to connect multiple VPCs and on-premises networks to a single gateway. As of now, Transit Gateway is a region-specific service.

Points specific to TGW:

  1. Attachments: A connection between a VPC/VPN and a Transit Gateway (TGW).
  2. Association: Used to identify which route table is consulted when traffic arrives for a particular VPC CIDR.
  3. Propagation: Used to propagate routes into a route table.
  4. Route tables: Consist of routes (where to look for the next hop).

The example scenario I took here has 5 VPCs. For simplicity, I have named the VPCs, subnets and other related components in an easy way. You may need to name them appropriately when you create the infra for a prod setup. …
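The four concepts above map one-to-one onto AWS CLI calls. A rough sketch (every resource ID below is a placeholder you would substitute with the IDs returned by your own account):

```shell
# Create the Transit Gateway (region-specific).
aws ec2 create-transit-gateway --description "demo-tgw"

# 1. Attachment: connect a VPC to the TGW through subnets in that VPC.
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0

# 2. Association: tie the attachment to a TGW route table.
aws ec2 associate-transit-gateway-route-table \
    --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
    --transit-gateway-attachment-id tgw-attach-0123456789abcdef0

# 3. Propagation: let the attachment's CIDRs appear in the route table.
aws ec2 enable-transit-gateway-route-table-propagation \
    --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
    --transit-gateway-attachment-id tgw-attach-0123456789abcdef0

# 4. Route table: a static route can also be added by hand.
aws ec2 create-transit-gateway-route \
    --destination-cidr-block 10.1.0.0/16 \
    --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 \
    --transit-gateway-attachment-id tgw-attach-0123456789abcdef0
```

Each VPC's own route tables also need a route pointing the other VPCs' CIDRs at the TGW, or traffic never leaves the VPC.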


CI/CD

Container-based microservices architectures have changed the way development and operations teams test and deploy modern applications and services. Containers help companies modernize by making it easier to scale and deploy applications, but they have also introduced new challenges and more complexity by creating an entirely new infrastructure ecosystem.

Large and small software companies are now deploying thousands of container instances daily, and that’s a complexity of scale they have to manage. So how do they do it?

The magic is Kubernetes.

Originally developed by Google, Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications.

In this tutorial, let’s see how we can include Kubernetes in our existing traditional CI/CD pipelines, achieve high availability of our services, and gain the ability to push code changes to production at any time (yes, anytime, without affecting the services). …
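One way Kubernetes delivers those zero-downtime updates is a Deployment with a conservative rolling-update strategy. A minimal sketch (the names, image, and probe path are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service           # hypothetical service name
spec:
  replicas: 3                # multiple replicas for availability
  selector:
    matchLabels:
      app: my-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never take a pod down before its replacement is ready
      maxSurge: 1            # roll out one extra pod at a time
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0.0
          readinessProbe:    # traffic only flows to pods that pass this check
            httpGet:
              path: /healthz
              port: 8080
```

Updating the image (for example, `kubectl set image deployment/my-service my-service=registry.example.com/my-service:1.0.1`) then triggers a rollout that keeps all replicas serving, and `kubectl rollout status deployment/my-service` watches it complete.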


Discover your services

What is a docker swarm?

A Docker Swarm is a group of either physical or virtual machines that are running the Docker application and that have been configured to join together in a cluster. Once a group of machines has been clustered together, you can still run the Docker commands that you’re used to, but they will now be carried out by the machines in your cluster. The activities of the cluster are controlled by a swarm manager, and machines that have joined the cluster are referred to as nodes.

Docker Swarm uses an overlay network. The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely when encryption is enabled. Docker transparently handles routing of each packet to and from the correct Docker daemon host and the correct destination container. …
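A minimal sketch of the pieces described above (the service and network names are arbitrary, and the commands assume a running Docker daemon on each node):

```shell
# On the manager node: initialise the swarm.
docker swarm init

# On each worker: join using the token that `swarm init` prints.
# docker swarm join --token <worker-token> <manager-ip>:2377

# Create an encrypted overlay network spanning all nodes.
docker network create --driver overlay --opt encrypted my-overlay

# Services attached to the network reach each other by service name.
docker service create --name web --network my-overlay --replicas 3 nginx
```

The swarm manager schedules the three `web` replicas across the nodes, and any container on `my-overlay` can reach them at the DNS name `web`.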



Ansible Dynamic Inventory with AWS EC2 Plugin

With a rapidly scaling cloud environment, it’s difficult to keep track of resources, since scaling operations are done automatically based on load and other parameters. You might have seen your Auto Scaling group launch a few more EC2 instances, and if you use Ansible with a static inventory, you might miss those new instances. So here we are going to focus mainly on how to use Ansible to create a dynamic inventory using the AWS EC2 plugin. I assume you have used Ansible for your daily operations and have some knowledge of it.

The Ansible dynamic inventory plugin is supported from Ansible version 2.7. Make sure you use Ansible 2.7+ …
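A minimal sketch of the plugin configuration (the region, tag key, and group prefix are assumptions for illustration; note the file name must end in `aws_ec2.yml` or `aws_ec2.yaml` for the plugin to pick it up):

```yaml
# aws_ec2.yml — inventory built live from the EC2 API on every run
plugin: aws_ec2
regions:
  - ap-south-1
filters:
  instance-state-name: running   # only include running instances
keyed_groups:
  - key: tags.Role               # group hosts by their "Role" tag
    prefix: role
hostnames:
  - private-ip-address           # address Ansible will connect to
```

With AWS credentials configured, `ansible-inventory -i aws_ec2.yml --graph` prints the discovered hosts grouped by tag, so freshly launched instances show up without touching any static inventory file.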


RabbitMQ is an open-source message broker that originally implemented the Advanced Message Queuing Protocol (AMQP) and has since been extended with a plug-in architecture to support the Streaming Text Oriented Messaging Protocol (STOMP), Message Queuing Telemetry Transport (MQTT), and other protocols. It is lightweight and easy to deploy on premises and in the cloud, and it supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements. …
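For a quick feel of the broker, the `rabbitmqadmin` tool that ships with RabbitMQ's management plugin can declare a queue, publish a message, and read it back (the queue name and payload are arbitrary, and a broker must be running locally):

```shell
# Declare a durable queue on the local broker.
rabbitmqadmin declare queue name=demo durable=true

# Publish a message to it via the default exchange.
rabbitmqadmin publish routing_key=demo payload="hello"

# Fetch the message back, acknowledging it so it is removed.
rabbitmqadmin get queue=demo ackmode=ack_requeue_false
```

The same flow works over AMQP, STOMP, or MQTT with the appropriate client library; `rabbitmqadmin` just makes it easy to poke at from a shell.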


Introduction

Downtime is unavoidable in IT infrastructure, and many factors can cause downtime for our applications: physical risks such as natural disasters, technology failures, or even a security breach. But we can keep our applications highly available if we design our systems to be highly available and highly reliable. In this blog, let’s see how we can make an application highly available, with a sample illustration.

What is High Availability?

Availability describes the period of time when a service is available, as well as the time the system takes to respond to a user’s request. High availability is a quality of a system or component that assures a high level of operational performance for a given period of time, such as the system being up 99%, 99.9%, or 99.999% of the time. …
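Those percentages translate directly into an allowed-downtime budget. A quick sketch of the arithmetic (assuming a 365-day year and ignoring leap days):

```shell
# Allowed downtime per year for a given availability target:
#   downtime = (100 - availability) / 100 * hours_in_year
for a in 99 99.9 99.999; do
  awk -v a="$a" 'BEGIN { printf "%s%% -> %.2f hours/year\n", a, (100 - a) / 100 * 365 * 24 }'
done
```

A 99% target allows 87.60 hours of downtime a year, 99.9% allows 8.76 hours, and 99.999% ("five nines") allows only about five minutes, which is why each extra nine gets dramatically more expensive to engineer.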

About

Deep

AWS Cloud Engineer, Bangalore
