If you are considering moving your development to dockers, or you are looking for a way to do it that works harmonically both for development and for delivery to production, this is for you.

Since I started playing with dockers several years ago and tried to develop with them on all levels, I have faced many questions, such as how to reuse and share the dockers of my local environment to avoid 1–2 day setups again and again. That was my first question.

Another thing came up when I was working with several repositories. As a developer, I want the whole team around me to look at and work on the same code. How should I handle code from many repositories?

Today, at Linnovate, we have many projects of different types, and I kept seeing these questions in my mind again and again. So I decided to put a lot of energy into creating a method that would handle most of our cases (if not all of them). What I will show is the result of what we use in all of our projects with dockers, and it came after many project setups and improvement iterations.

I will describe how to manage a microservices-based project with dockers in git, covering the git repository design, the development, and the delivery requirements.

Note: we work with GitLab, but the concept can be transitioned to other repository managers in a similar way.

Challenges

- Microservices integration - how do we manage the relations between all the microservices?
- Versioning - what will represent the final code version now that there are many repositories?
- Sharing the local environment setup - working with dockers creates many assets that are not related to the code itself: Dockerfiles, configuration files, docker-compose files, scripts, and probably more. Where should we store them? How can we share them with others?
- DevOps - where do we store production assets, like dockers and deployment scripts, which are different from the local development dockers?

The GS3D Pattern - Git Submodules and Dockers Driven Development

To handle these challenges, we decided to base our solution on git submodules, plus docker-compose to describe and run the local environment.

In order to fetch all the repositories, we will use git submodules. Besides fetching the code, each submodule points at a specific commit, which indicates its version. Git submodules are a bit tricky to work with, but there are ways of working with them, as can be seen here. We'll use them in the integration repository.

- Consolidation comes first. One main repository - one place to work with, which gives everything in a clear way. When you have a lot of projects with different technologies, one concept focuses all developers on the same methodology of work: it is easy to jump between projects, easy to set up new projects, and there are many more advantages when there is an expected structure.
- Easy setup - setting up the local environment should be easy. That means keeping the same structure across projects as much as possible.
- Separation - we wanted to have the option to reuse a microservice between several projects.

The integration repository is actually the project code repository - the final picture of the code. It represents the code of each microservice at a specific version, and also the development environment. This repo will be cloned in order to get the code and set up the local environment, so you can develop with its files and dockers. The microservices are referenced using a git submodule for each repo, and each microservice's commit represents its version. The local repository will be docker-compose based and will use the submodules' code. Besides representing the code, we also use this repository as our local environment setup:

- Submodules (ms1, ms2, ms3) - all the microservices as submodules. As an example, there can be 3 submodules: WordPress, NodeJS, and React.
- utils - all the files that are required in order to set up the local environment, like Dockerfiles and configs. We store the Dockerfiles and configurations there.
- docker-compose.yml - represents all the local services, where each service works with a cloned submodule's code. A service can also work with a custom Dockerfile from the utils directory, or use a virtual docker volume instead.

To set up the local environment:

```
git clone --recursive link-to-integration-repo
cp .env
docker-compose up -d
```

All deployment-related assets, such as k8s files, will be stored in another repository, since they are related neither to the code itself nor to the local environment. With CI we can push the microservices as images to some container registry and fetch them on the remote environments. In this example, the remote environments will use docker-compose, and the code will be fetched by sub-moduling the integration repository.

This example is of a WordPress website with React, and a NodeJS GraphQL API alongside it to work with the React components.

Step 1 - Create the project and the micro-services in it

Create the project and the repositories with the right structure:
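The docker-compose.yml that drives the local environment might look roughly like this. It is a hedged sketch, not the project's actual file: the service names (`wordpress`, `api`), the mounted paths, and the `utils/node.Dockerfile` filename are all hypothetical.

```yaml
version: "3"
services:
  wordpress:
    image: wordpress              # stock image; the code comes from the submodule
    env_file: .env
    volumes:
      - ./ms1:/var/www/html       # ms1 = the WordPress submodule clone
  api:
    build:
      context: .
      dockerfile: utils/node.Dockerfile   # custom Dockerfile stored in utils/
    env_file: .env
    volumes:
      - ./ms2:/app                # ms2 = the NodeJS/GraphQL submodule
```

Mounting each submodule checkout as a volume is what lets every developer edit the microservice code in place while the container runs it.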
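For the CI part - pushing each microservice as an image to a container registry - a GitLab CI job in a microservice repository could be sketched as follows. The job name and tagging scheme are assumptions; the `$CI_REGISTRY*` and `$CI_COMMIT_SHORT_SHA` variables are GitLab's predefined CI/CD variables.

```yaml
# .gitlab-ci.yml (sketch) in one microservice repository
build-image:
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Tag with the commit SHA: the same commit the integration repo pins,
    # so the image version matches the submodule version.
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

Remote environments can then pull these images, while the environment definition itself is fetched by sub-moduling the integration repository.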
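The whole flow - microservice repos, an integration repo that pins them as submodules, and a recursive clone for the local environment - can be sketched with plain git commands. Everything here is a toy stand-in: `ms1.git`, `integration`, and `local-env` are throwaway local repositories created in a temp directory, not the real GitLab ones, and the identity/`protocol.file.allow` flags are only needed because the sketch uses local `file://` repos.

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# 1. A microservice repo (stand-in for the WordPress / NodeJS / React repos)
git init -q --bare ms1.git
git clone -q ms1.git ms1-work 2>/dev/null
git -C ms1-work -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "ms1 initial version"
git -C ms1-work push -q origin HEAD

# 2. The integration repository: each microservice is a submodule,
#    pinned to the commit that represents its version.
git init -q integration
git -C integration -c protocol.file.allow=always \
    submodule --quiet add ../ms1.git ms1
git -C integration -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "add ms1 at its current version"

# 3. What a developer runs to get the whole project in one go
git -c protocol.file.allow=always clone -q --recursive integration local-env
git -C local-env submodule status   # ms1 appears pinned to a specific commit
```

Bumping a project version is then just checking out a newer commit inside a submodule and committing the integration repo, which records the new pin.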