Simplifying local dev setup with Docker Compose

Category: IT · Published: 4 years ago


If you’ve ever had to deal with setting up a Node.js project in which you had to install a bunch of things – like MySQL/Postgres, Redis, etc. – and then run some setup scripts just to be able to get the project running locally on your machine…

then you’ve likely experienced the pain of losing at least half a day just getting set up.

This is especially frustrating – and anxiety-inducing – if you’re new to the team and want to start contributing right away, rather than wasting time in a maze of setup steps or asking the team every five minutes how to get over the next install hurdle.

What’s worse, as the project evolves you might need to install more things, the setup scripts might become more complex, and (worst of all, in my opinion) the documentation for that setup might become out of date.

Rather than having to install a bunch of things – or, in the case of bad documentation, figure out what you need to install in the first place – there’s a much easier way that can get you up and running in as little as one or two commands.

Enter Docker Compose

Docker Compose gives us the ability to define install dependencies – like databases and other software – and run them within containers that your “main” code can interact with.

In order to best explain how to use Compose – and how to convert an existing project with local install steps, scripts, etc. – I’ll use a demo repo I wrote a while back (which accompanied this post on designing reliable queues).

When I originally built that project, it was using “the old way”, without Compose.

But I recently re-wrote it to use Compose for creating Redis and Postgres containers, and to be able to run the tests against those containers (using Compose is also really good for having local test databases).

New world and old world

First, let’s look at how the project was set up using “the old way”:

– first install Homebrew

– then install Postgres

– then create a “root” database

– then define the schema

– then run a script to install Redis

– then run a script to start Postgres

– then run a script to start Redis

That’s a lot of steps…

Now, let’s take a look at the steps involved using Docker Compose:

docker-compose up

…and that’s it.

How were we able to accomplish this?

Let’s look at how I converted this project over to using Compose.

Postgres

Instead of having to install Postgres (and Homebrew, if you didn’t already have it installed) and then define our database and schema, using Compose that becomes:

 
version: '3.7'
services:
  db_queue:
    image: postgres:9.6.17
    container_name: db_queue
    environment:
      POSTGRES_DB: library
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
 
volumes:
  db-data:
 

Note that the above is contained in the docker-compose.yml file in the root of our project.

Second note: you’ll need to have Docker installed on your machine in order to use Docker Compose.

We define our “install dependencies” within the services section, in this case, Postgres.

Then we define the basic environment variables that Postgres needs to start up the database. In the old world we created the database from the command line via psql; here we just name it under POSTGRES_DB .

The service’s volumes section mounts an initialization script (more on this in a second) and defines a database volume that gets “mounted” alongside the container. We give that volume its name in the top-level volumes section, in this case db-data .

The reason we do that is so that bringing the “stack” down with docker-compose down won’t clear the schema definitions and data stored in the database. If we do want to delete that information and bring it all down, we can run docker-compose down -v , with the -v flag standing for “volumes”.

The init.sql script (used to create the table schema as the container boots up) still needs to be written, but instead of you having to run it against the database manually, the Postgres container executes it the first time it initializes. In other words, it’s automatic rather than manual, and removes a step for us.

And here’s what that init.sql script looks like:

CREATE TABLE books (book_number int, isbn text);
 

Lastly, in the service’s ports section we map the container port to a port on the host machine (the host machine being your machine itself), so that you can access the container from your machine.
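To illustrate, with the ports mapped you can point a client at localhost:5432 using the credentials from the Compose file. Here’s a minimal Node.js sketch – the pg client shown in the comments is an assumption about the project’s stack, not something named in this post:

```javascript
// Connection settings matching the docker-compose.yml above.
const pgConfig = {
  host: 'localhost',    // "5432:5432" maps the container port to the host
  port: 5432,
  database: 'library',  // POSTGRES_DB
  user: 'root',         // POSTGRES_USER
  password: 'password', // POSTGRES_PASSWORD
};

// With the stack up, a client such as `pg` could use this config:
//   const { Pool } = require('pg');
//   const pool = new Pool(pgConfig);
//   const { rows } = await pool.query('SELECT * FROM books');

console.log(`postgres://${pgConfig.user}@${pgConfig.host}:${pgConfig.port}/${pgConfig.database}`);
// → postgres://root@localhost:5432/library
```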

Redis

For Redis, it’s even simpler. In that same services section, we do:

 
redis_queue:
  image: redis:5.0.6
  container_name: redis_queue
  ports:
    - 6379:6379
 

Define the Docker Redis image to use, give the container a name, and map the ports. Simple.

Compared to the old world – where we had to run a script that used wget to download Redis and build it using make , then start Redis using a separate script – the Compose way is much easier.
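As with Postgres, the port mapping means a client on the host just points at localhost:6379. A minimal Node.js sketch – ioredis in the comments is an assumed client library, not one named in this post:

```javascript
// Resolve the Redis location, defaulting to the Compose port mapping ("6379:6379").
const redisHost = process.env.REDIS_HOST || 'localhost';
const redisPort = Number(process.env.REDIS_PORT || 6379);
const redisUrl = `redis://${redisHost}:${redisPort}`;

// With the stack up, a client such as ioredis could connect:
//   const Redis = require('ioredis');
//   const redis = new Redis(redisUrl);
//   await redis.ping(); // 'PONG'

console.log(redisUrl);
```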

Leveraging the Compose containers

Real quick, here’s the docker-compose.yml file in its entirety:

 
version: '3.7'
services:
  redis_queue:
    image: redis:5.0.6
    container_name: redis_queue
    ports:
      - 6379:6379
  db_queue:
    image: postgres:9.6.17
    container_name: db_queue
    environment:
      POSTGRES_DB: library
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
 
volumes:
  db-data:
 

As I mentioned before, all we need to do to start the “stack” is run docker-compose up , and Docker will use the Compose file and the services defined therein to spin up the containers.

Because we have the container ports mapped to the local machine, we can run the unit/integration tests using npm test – there’s nothing different we need to do.

You can also run the code against the containers, not just the tests. Simple.

Wrapping up

If you’re continuously bumping up against problems running your project locally, strongly consider using Docker Compose for this instead.

It makes defining a “stack” for local development a lot simpler and more headache-free than installing a bunch of software directly on your machine. And in this post we’ve really only scratched the surface of what you can do. It can make your developer life SO much easier.

Knowing how to set up a project for easy local development is one hurdle… understanding how to structure your project is another. Want an Express REST API structure template that makes it clear where your logic should go? Sign up below to receive that template, plus a post explaining how that structure works and why it’s set up that way, so you don’t have to waste time wondering where your code should go. You’ll also receive all my new posts directly to your inbox!

