Deployment and Operations
uwsgi vs Gunicorn
The difference between the two servers: uwsgi is written in C, while Gunicorn is implemented in pure Python. Gunicorn is slower than uwsgi, but easier to install and use.
Since Docker takes care of uwsgi's installation problems, uwsgi is the first choice here, with Gunicorn as the fallback.
Tips
Restart command: uwsgi --reload /tmp/project-master.pid
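The pid file in that command only exists if uwsgi was started with the pidfile option. A minimal uwsgi.ini sketch follows; the module name and port are assumptions for illustration, not values taken from the actual project:
uwsgi.ini
[uwsgi]
# WSGI entry point of the Django project (hypothetical name)
module = mysite.wsgi:application
# run a master process so graceful reloads work
master = true
processes = 4
http = :9090
# uwsgi --reload /tmp/project-master.pid reads the master pid from here
pidfile = /tmp/project-master.pid
# remove sockets and the pid file on exit
vacuum = true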
docker + docker-compose + uwsgi + jenkins
Jenkins builds the project automatically, each service runs in its own container, and docker-compose orchestrates the services.
To reduce the size of the Django project image, a multi-stage build is used.
# syntax=docker/dockerfile:1
FROM python:3.10.6-bullseye as Build

# Declare the maintainer https://docs.docker.com/engine/reference/builder/#label
LABEL maintainer="lzx" \
      email="397132445@qq.com"

RUN pip3 config set global.index-url https://mirrors.aliyun.com/pypi/simple && \
    pip3 config set install.trusted-host mirrors.aliyun.com

RUN pip3 install --no-cache-dir --upgrade pip

COPY server/requirements.txt requirements.txt

RUN pip3 install -r requirements.txt

FROM python:3.10.6-slim-bullseye

# Set the timezone to Asia/Shanghai (Docker in Practice, 2nd Edition, Technique 25)
RUN rm -rf /etc/localtime
RUN ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

RUN echo "" > /etc/apt/sources.list && \
    echo "deb https://mirrors.aliyun.com/debian stable main contrib non-free">>/etc/apt/sources.list && \
    echo "deb https://mirrors.aliyun.com/debian stable-updates main contrib non-free">>/etc/apt/sources.list && \
    apt-get clean && \
    apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y --no-install-recommends \
        libxml2 \
        ssh \
        sshpass

# libxml2 -> uwsgi, ssh -> fabric, sshpass -> ansible

# SSH settings
# Allow root login
RUN echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
# Allow password authentication
RUN echo "PasswordAuthentication yes" >> /etc/ssh/sshd_config
# Set the root password
RUN echo "root:123456" | chpasswd
# Restart the ssh service with: /etc/init.d/ssh restart

RUN pip3 config set global.index-url https://mirrors.aliyun.com/pypi/simple && \
    pip3 config set install.trusted-host mirrors.aliyun.com

COPY --from=Build /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages
COPY --from=Build /usr/local/bin/inv /usr/local/bin/inv
COPY --from=Build /usr/local/bin/uwsgi /usr/local/bin/uwsgi
COPY --from=Build /usr/local/bin/pytest /usr/local/bin/pytest
COPY --from=Build /usr/local/bin/invoke /usr/local/bin/invoke
COPY --from=Build /usr/local/bin/scrapy /usr/local/bin/scrapy
COPY --from=Build /usr/local/bin/celery /usr/local/bin/celery
COPY --from=Build /usr/local/bin/ipython /usr/local/bin/ipython
COPY --from=Build /usr/local/bin/sphinx-build /usr/local/bin/sphinx-build
COPY --from=Build /usr/local/bin/django-admin /usr/local/bin/django-admin

WORKDIR /code

COPY server ./server

RUN mkdir /code/temp
RUN mkdir "/var/log/uwsgi"

WORKDIR server

EXPOSE 9090

# ENTRYPOINT [ "uwsgi", "--ini", "uwsgi.ini"]

# flower start command:
# docker run -p 5555:5555 -e CELERY_BROKER_URL=redis://redis:6379/0 --net py-blog_default --link redis:redis mher/flower
bullseye is a large, all-inclusive environment and, at over 1 GB, is not suitable for production; it is used mainly so that uwsgi installs successfully. alpine is the smallest environment, but uwsgi still needs some shared libraries at runtime that alpine lacks, so slim-bullseye, which sits in between in size, is chosen as the base image for the production stage. The multi-stage build shrinks the image from over 1 GB to a little over 500 MB.
Both containers run the same project code, so they use the same image (pyblog).
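A docker-compose.yml along the following lines captures that layout. The service names, the celery app name, and the port mapping are assumptions for illustration only, not the project's actual compose file:
docker-compose.yml
version: "3"
services:
  web:
    image: pyblog                 # image built from the Dockerfile above
    command: uwsgi --ini uwsgi.ini
    ports:
      - "9090:9090"
    depends_on:
      - redis
  celery-worker:
    image: pyblog                 # same image, different command
    command: celery -A server worker --loglevel=info   # app name is a placeholder
    depends_on:
      - redis
  redis:
    image: redis:6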
docker + supervisor + fabric + gunicorn
https://docs.docker.com/config/containers/multi-service_container/
Only one Dockerfile is needed: cloning the code with git and starting supervisor both happen inside the container. fabric is used to upload the Dockerfile to the server and trigger the build, and supervisor manages the multiple service processes inside the Docker container.
Deploying one service per container makes the project easier to maintain and extend, but raises deployment complexity. Putting all services in a single container under supervisor is simpler and requires less work; if the project is small, or you want to save time and effort, this approach works well.
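The fabric step (upload the Dockerfile and build on the server) is not reproduced from the private project; the following is a minimal sketch of what such a fabfile could look like. The host, paths, and image/container names are placeholders that match the build and run commands at the end of the example Dockerfile below.
fabfile.py
# A sketch only; host, paths, and names are placeholders.
from fabric import task

@task
def deploy(c):
    # c is a Connection when invoked as: fab -H root@your-server deploy
    c.run("mkdir -p /opt/build/.ssh")
    # Upload the Dockerfile and the ssh key that the Dockerfile copies into the image
    c.put("Dockerfile", "/opt/build/Dockerfile")
    c.put(".ssh/id_ed25519", "/opt/build/.ssh/id_ed25519")
    with c.cd("/opt/build"):
        # CACHEBUST forces the git clone layer to re-run so the latest code is pulled
        c.run("docker build -t my --build-arg=CACHEBUST=$(date +%s) .")
    # Replace the running container with one based on the new image
    c.run("docker rm -f myproject || true")
    c.run("docker run -itd -p 80:80 --name myproject my")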
See also
My private project myproject
core (clone the project code with git inside the container)
Docker in Practice, 2nd Edition, page 734
Example
Dockerfile
# syntax=docker/dockerfile:1
# Base image https://docs.docker.com/engine/reference/builder/#from
FROM python:3.8.13-slim-buster

# Set the timezone to Asia/Shanghai (Docker in Practice, 2nd Edition, Technique 25)
RUN rm -rf /etc/localtime
RUN ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

# Declare the maintainer https://docs.docker.com/engine/reference/builder/#label
LABEL maintainer="luzhenxiong" \
      email="397132445@qq.com"

# Build argument, can be passed in to docker build https://docs.docker.com/engine/reference/builder/#arg
ARG BASE_DIR=/opt/code/

# Run shell commands https://docs.docker.com/engine/reference/builder/#run
# Each Dockerfile instruction creates a new layer on top of the previous image; chaining commands with && inside one RUN keeps the image from growing too large -- Docker in Practice, 2nd Edition, page 116
RUN sed -i 's/deb.debian.org/mirrors.tuna.tsinghua.edu.cn/' /etc/apt/sources.list && \
    sed -i 's/security.debian.org/mirrors.tuna.tsinghua.edu.cn/' /etc/apt/sources.list && \
    sed -i 's/security-cdn.debian.org/mirrors.tuna.tsinghua.edu.cn/' /etc/apt/sources.list && \
    apt-get clean && \
    apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y --no-install-recommends \
        curl \
        vim \
        git \
        ssh \
        redis \
        nginx

# Copy files/directories https://docs.docker.com/engine/reference/builder/#copy
# Copy the host's ssh credentials into the container so git can clone the private repository
COPY ./.ssh/ /root/.ssh/

RUN chmod 0600 /root/.ssh/id_ed25519

# Set the working directory https://docs.docker.com/engine/reference/builder/#workdir
WORKDIR $BASE_DIR

RUN pip3 config set global.index-url https://mirrors.aliyun.com/pypi/simple && \
    pip3 config set install.trusted-host mirrors.aliyun.com && \
    pip3 install ipython

# Build-time variable https://docs.docker.com/engine/reference/builder/#arg
# Bust the cache on demand: pass CACHEBUST=${RANDOM} or CACHEBUST=$(date +%s) at build time and every instruction after this line skips the cache
# Note: in bash, ${RANDOM} is a random number and $(date +%s) is a timestamp
# Compared with ``--no-cache``, this gives finer-grained control over the cache
ARG CACHEBUST=no

# Clone the project code
RUN git clone git@gitee.com:luzhenxiong/myproject.git $BASE_DIR && \
    pip3 install -r mydjango/requirements.txt && \
    touch /tmp/supervisor.sock && \
    cp -rf $BASE_DIR/deployment/Supervisor/supervisord.d /etc/supervisord.d && \
    # Generate the supervisor config file
    echo_supervisord_conf > /etc/supervisord.conf && \
    # http://supervisord.org/configuration.html#include-section-settings
    echo "[include]" >> /etc/supervisord.conf && \
    echo "files = supervisord.d/*.ini" >> /etc/supervisord.conf && \
    echo "" >> /etc/supervisord.conf && \
    # Run supervisord in the foreground http://supervisord.org/configuration.html#supervisord-section-settings
    echo "[supervisord]" >> /etc/supervisord.conf && \
    echo "nodaemon=true" >> /etc/supervisord.conf && \
    # Copy the nginx config file
    cp $BASE_DIR/deployment/nginx/nginx.conf /etc/nginx/nginx.conf && \
    # Collect static files
    python mydjango/manage.py collectstatic

# Containers started from this image are expected to listen on this port https://docs.docker.com/engine/reference/builder/#expose
EXPOSE 80

# Command to run when the container starts https://docs.docker.com/engine/reference/builder/#cmd
CMD supervisord -c /etc/supervisord.conf

# Build the image
# docker build -t my --build-arg=CACHEBUST=${RANDOM} .

# Start the container
# docker run -itd -p 80:80 --name myproject my
supervisor
[program:gunicorn]
command=gunicorn project_django.wsgi -c gunicorn_config.py
process_name=%(program_name)s ; process_name expr (default %(program_name)s)
numprocs=1 ; number of processes copies to start (def 1)
umask=022 ; umask for process (default None)
priority=999 ; the relative start priority (default 999)
autostart=true ; start at supervisord start (default: true)
directory=/opt/myproject/mydjango
autorestart=true ; restart at unexpected quit (default: true)
startsecs=10 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
stderr_logfile=/var/log/mydjango-err.log ; stderr log path, NONE for none; default AUTO
stdout_logfile=/var/log/mydjango-out.log ; stdout log path, NONE for none; default AUTO

[program:celeryWorker]
command=celery -A project_django worker --loglevel=info
process_name=%(program_name)s ; process_name expr (default %(program_name)s)
numprocs=1 ; number of processes copies to start (def 1)
umask=022 ; umask for process (default None)
priority=999 ; the relative start priority (default 999)
autostart=true ; start at supervisord start (default: true)
directory=/opt/myproject/mydjango
autorestart=true ; restart at unexpected quit (default: true)
startsecs=10 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
stderr_logfile=/var/log/celery-worker-err.log ; stderr log path, NONE for none; default AUTO
stdout_logfile=/var/log/celery-worker-out.log ; stdout log path, NONE for none; default AUTO

[program:celeryBeat]
command=celery -A project_django beat -l info --scheduler django_celery_beat.schedulers:DatabaseScheduler
process_name=%(program_name)s ; process_name expr (default %(program_name)s)
numprocs=1 ; number of processes copies to start (def 1)
umask=022 ; umask for process (default None)
priority=999 ; the relative start priority (default 999)
autostart=true ; start at supervisord start (default: true)
directory=/opt/myproject/mydjango
autorestart=true ; restart at unexpected quit (default: true)
startsecs=10 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
stderr_logfile=/var/log/celery-beat-err.log ; stderr log path, NONE for none; default AUTO
stdout_logfile=/var/log/celery-beat-out.log ; stdout log path, NONE for none; default AUTO

[program:redis]
command=/usr/bin/redis-server /etc/redis/redis.conf --daemonize no
process_name=%(program_name)s ; process_name expr (default %(program_name)s)
numprocs=1 ; number of processes copies to start (def 1)
umask=022 ; umask for process (default None)
priority=999 ; the relative start priority (default 999)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=10 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
stderr_logfile=/var/log/redis-err.log ; stderr log path, NONE for none; default AUTO
stdout_logfile=/var/log/redis-out.log ; stdout log path, NONE for none; default AUTO

[program:nginx]
command=nginx -c /etc/nginx/nginx.conf -g 'daemon off;' ; start command
user=root ; run the program as root
autostart=true ; start at supervisord start
autorestart=true ; restart automatically after an unexpected exit
stopasgroup=true ;
killasgroup=true ;
stdout_logfile=/var/log/nginx-out.log ;
stderr_logfile=/var/log/nginx-err.log
One thing to note when running Redis under supervisor: Redis can daemonize itself, which conflicts with supervisor, so the command adds --daemonize no. The same applies to Nginx (daemon off; in the command above).
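Once the container is up, the individual programs can be inspected and restarted through supervisorctl without restarting the container itself. A usage sketch, assuming the container name myproject and the config path from the Dockerfile above, and that the generated supervisord.conf keeps the sample control socket enabled:
docker exec -it myproject supervisorctl -c /etc/supervisord.conf status
docker exec -it myproject supervisorctl -c /etc/supervisord.conf restart gunicorn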
Deploying a Python program to an offline environment
For example, suppose you want to install the requests library in an offline environment.
Run the following in an online environment:
pip download requests
This downloads the wheel files for requests and its dependencies:
requests-2.28.1-py3-none-any.whl
urllib3-1.26.12-py2.py3-none-any.whl
certifi-2022.9.24-py3-none-any.whl
idna-3.4-py3-none-any.whl
charset_normalizer-2.1.1-py3-none-any.whl
Copy them to the offline environment and run:
pip install requests-2.28.1-py3-none-any.whl urllib3-1.26.12-py2.py3-none-any.whl certifi-2022.9.24-py3-none-any.whl idna-3.4-py3-none-any.whl charset_normalizer-2.1.1-py3-none-any.whl
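For a whole project it is easier to download everything listed in requirements.txt into one directory and point pip at that directory offline, instead of typing out each file name (the directory name below is just an example):
pip download -r requirements.txt -d ./packages
pip install --no-index --find-links=./packages -r requirements.txt
Note that pip download fetches wheels for the current machine, so the online machine should match the offline one in operating system, architecture, and Python version.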
Deploying a Docker application to an offline environment
Build the image in an online Linux environment, then run:
docker save {image-id} > image.tar
Warning
Exporting the image on Windows is not recommended; it is very slow.
Copy the archive to an offline machine that has Docker installed and run:
docker load -i image.tar
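Note that saving by image ID does not preserve the repository name and tag, so the loaded image shows up as <none> and has to be re-tagged with docker tag. Saving by name:tag keeps them (myimage:latest below is a placeholder):
docker save -o image.tar myimage:latest
docker load -i image.tar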