Accessing Host Services from Inside a Docker Container
1. Scenario
I do my day-to-day development and testing on Windows with WSL2, but WSL2 frequently runs into networking problems. For example, today I was testing a project whose core task is to sync data from postgres to clickhouse using the open-source component synch.
Components required for the test:
- postgres
- kafka
- zookeeper
- redis
- synch container
For the first round of testing, the plan was to orchestrate the five services above with docker-compose, using `network_mode: host`. Given kafka's listener security mechanism, this network mode avoids having to expose each port individually.
The docker-compose.yaml file is as follows:
```yaml
version: "3"
services:
  postgres:
    image: failymao/postgres:12.7
    container_name: postgres
    restart: unless-stopped
    privileged: true
    # point postgres at the mounted config files
    command: [ "-c", "config_file=/var/lib/postgresql/postgresql.conf", "-c", "hba_file=/var/lib/postgresql/pg_hba.conf" ]
    volumes:
      - ./config/postgresql.conf:/var/lib/postgresql/postgresql.conf
      - ./config/pg_hba.conf:/var/lib/postgresql/pg_hba.conf
    environment:
      POSTGRES_PASSWORD: abc123
      POSTGRES_USER: postgres
      POSTGRES_PORT: 15432
      POSTGRES_HOST: 127.0.0.1
    healthcheck:
      test: sh -c "sleep 5 && PGPASSWORD=abc123 psql -h 127.0.0.1 -U postgres -p 15432 -c '\q';"
      interval: 30s
      timeout: 10s
      retries: 3
    network_mode: "host"

  zookeeper:
    image: failymao/zookeeper:1.4.0
    container_name: zookeeper
    restart: always
    network_mode: "host"

  kafka:
    image: failymao/kafka:1.4.0
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: localhost:2181
      KAFKA_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_RETENTION_HOURS: 24
      KAFKA_LOG_DIRS: /data/kafka-data  # data mount
    network_mode: "host"

  producer:
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: producer
    command: sh -c "sleep 30 && synch --alias pg2ch_test produce"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  # one consumer consumes one database
  consumer:
    tty: true
    depends_on:
      - redis
      - kafka
      - zookeeper
    image: long2ice/synch
    container_name: consumer
    command: sh -c "sleep 30 && synch --alias pg2ch_test consume --schema pg2ch_test"
    volumes:
      - ./synch.yaml:/synch/synch.yaml
    network_mode: "host"

  redis:
    hostname: redis
    container_name: redis
    image: redis:latest
    volumes:
      - redis:/data
    network_mode: "host"

volumes:
  redis:
  kafka:
  zookeeper:
```
During testing, postgres needed the wal2json plugin, and installing extra components inside the container proved troublesome; after several failed attempts I gave up on that approach and installed postgres on the host instead, pointing the synch service in the container at the host's IP and port.
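As context for the host-side setup: logical decoding with wal2json requires `wal_level = logical` in postgres; the rest of the fragment below is a sketch with assumed values, with the port matching the one used in the synch config later in this post:

```ini
# postgresql.conf on the host -- sketch for wal2json logical decoding
wal_level = logical          # required for logical decoding plugins such as wal2json
max_replication_slots = 4    # assumed value; at least one free slot is needed
max_wal_senders = 4          # assumed value
listen_addresses = '*'       # assumed, so connections from containers are accepted
port = 5433                  # the port used in the synch config
```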
But after restarting the services, synch would not come up; the logs showed it could not connect to postgres. The synch configuration file is as follows:
```yaml
core:
  debug: true  # when set True, will display sql information.
  insert_num: 20000  # how many num to submit, recommend set 20000 when production
  insert_interval: 60  # how many seconds to submit, recommend set 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false  # enable redis sentinel
  sentinel_hosts:  # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000  # stream max len, will delete redundant ones with FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka  # current support redis and kafka
    host: 127.0.0.1
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name:  # enable cluster mode when not empty, and hosts must be more than one if enable.
  distributed_suffix: _all  # distributed tables suffix, available in cluster

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
```
This was puzzling: postgres was confirmed to be up and listening on its port (5433 here), yet connections using both localhost and the host's eth0 address failed.
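A quick TCP probe run from inside the container helps tell "service down" apart from "wrong address". A minimal sketch in Python (host and port are the values from this post):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From inside the container, 127.0.0.1:5433 is the container's own loopback,
# not the host's -- so this fails even though postgres is up on the host.
print(can_connect("127.0.0.1", 5433))
```

Seeing the probe succeed against one address and fail against another narrows the problem down to name/address resolution rather than the service itself.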
2. Solution
Googling turned up a highly upvoted stackoverflow answer that solved the problem. The original answer:
If you are using Docker-for-mac or Docker-for-Windows 18.03+, just connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string).
If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker
container with the --add-host host.docker.internal:host-gateway option.
Otherwise, read below
Use `--network="host"` in your docker run command, then 127.0.0.1 in your docker container will point to your docker host.
See the original post for more details.
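For bridge-networked containers, the `--add-host` option from the answer maps to `extra_hosts` in docker-compose. A minimal sketch for the synch image used in this post (the `host-gateway` keyword assumes Docker 20.10+ on Linux):

```yaml
services:
  synch:
    image: long2ice/synch
    extra_hosts:
      - "host.docker.internal:host-gateway"  # resolves to the host's gateway IP
```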
Accessing host services from a container in host network mode
Changing the postgres address in the synch config to host.docker.internal resolved the error. The host's /etc/hosts file looks like this:
```shell
root@failymao-NC:/mnt/d/pythonProject/pg_2_ch_demo# cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation
# of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1       localhost
10.111.130.24   host.docker.internal
```
As you can see, the host's IP is mapped to that name: accessing the domain name resolves to the host's IP, which reaches the services running on the host.
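The resolution step can be sketched with a tiny /etc/hosts-style parser (illustration only; real lookups go through the system resolver):

```python
def parse_hosts(text: str) -> dict:
    """Parse /etc/hosts-style text into a hostname -> IP mapping."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            mapping[name] = ip
    return mapping

hosts = """
127.0.0.1 localhost
10.111.130.24 host.docker.internal
"""
print(parse_hosts(hosts)["host.docker.internal"])  # → 10.111.130.24
```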
The final synch configuration used to start the service:
```yaml
core:
  debug: true  # when set True, will display sql information.
  insert_num: 20000  # how many num to submit, recommend set 20000 when production
  insert_interval: 60  # how many seconds to submit, recommend set 60 when production
  # enable this will auto create database `synch` in ClickHouse and insert monitor data
  monitoring: true

redis:
  host: redis
  port: 6379
  db: 0
  password:
  prefix: synch
  sentinel: false  # enable redis sentinel
  sentinel_hosts:  # redis sentinel hosts
    - 127.0.0.1:5000
  sentinel_master: master
  queue_max_len: 200000  # stream max len, will delete redundant ones with FIFO

source_dbs:
  - db_type: postgres
    alias: pg2ch_test
    broker_type: kafka  # current support redis and kafka
    host: host.docker.internal
    port: 5433
    user: postgres
    password: abc123
    databases:
      - database: pg2ch_test
        auto_create: true
        tables:
          - table: pgbench_accounts
            auto_full_etl: true
            clickhouse_engine: CollapsingMergeTree
            sign_column: sign
            version_column:
            partition_by:
            settings:

clickhouse:
  # shard hosts when cluster, will insert by random
  hosts:
    - 127.0.0.1:9000
  user: default
  password: ''
  cluster_name:  # enable cluster mode when not empty, and hosts must be more than one if enable.
  distributed_suffix: _all  # distributed tables suffix, available in cluster

kafka:
  servers:
    - 127.0.0.1:9092
  topic_prefix: synch
```
3. Summary
When a container is started with `--network="host"` and you want to reach a service running on the host from inside the container, set the address to `host.docker.internal`.