Quick Find (pinned)!

System
Rocky Linux 9: allow root login over SSH
How to allow the root user to log in over SSH on Rocky Linux 9:

1. Edit the SSH configuration file

vi /etc/ssh/sshd_config   # press i to enter insert mode

Find the following line

#PermitRootLogin prohibit-password

and change it to

PermitRootLogin yes

Press Esc, then type :wq to save and exit

2. Restart the SSH service

systemctl restart sshd

The root user can now log in remotely over SSH
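A quick way to confirm the change took effect (run the first command on the server; the second assumes you substitute your server's IP for the placeholder):

# print the effective sshd setting
sshd -T | grep -i permitrootlogin
# then test the login itself from another machine
ssh root@<server-ip>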
Boot without a GUI: set the default boot target to the command line
Run on the system:
## 1) Set the default boot target to the command line
sudo systemctl set-default multi-user.target

## 2) Switch to the command line immediately (no reboot needed)
sudo systemctl isolate multi-user.target

## 3) Optional: disable the graphical login manager (more thorough)
sudo systemctl disable --now gdm

## 4) To restore the graphical interface (reverse the steps)
sudo systemctl set-default graphical.target
sudo systemctl enable --now gdm
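To confirm which target is now the default:

systemctl get-default   # should print multi-user.target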
Configure a Clash proxy in PowerShell
### Open the PowerShell profile and add the lines below
notepad $PROFILE

$env:HTTP_PROXY = "http://127.0.0.1:7899"
$env:HTTPS_PROXY = "http://127.0.0.1:7899"
$env:NO_PROXY = "localhost,127.0.0.1"
Rocky Linux 9: Docker image incompatibility (SELinux)
https://www.sujx.net/2023/07/10/RockyLinux-Container/index.html
Firewalld
# Start
systemctl start firewalld

# Check status
systemctl status firewalld

# Disable (do not start at boot)
systemctl disable firewalld

# Stop
systemctl stop firewalld
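In day-to-day use you usually open individual ports rather than stopping the whole service; a minimal sketch (port 8080 is just an example):

# open a port permanently and reload the rules
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload
firewall-cmd --list-ports   # confirm the port is listed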
SyncTime
# Install the NTP service
yum install ntp

# Enable the service at boot
systemctl enable ntpd

# Start the service
systemctl start ntpd

# Change the time zone
timedatectl set-timezone Asia/Shanghai

# Enable NTP synchronization
timedatectl set-ntp yes

# Check peer status (ntpq -p queries the NTP peers rather than forcing a sync)
ntpq -p



### crontab
[root@master tmp]# vi /tmp/synctime.sh
#!/bin/bash
systemctl restart ntpd
timedatectl set-timezone Asia/Shanghai
timedatectl set-ntp yes
ntpq -p

[root@master tmp]# crontab -e
* * * * * /tmp/synctime.sh
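To confirm the result (timedatectl without arguments prints the current state):

timedatectl   # check the time zone and that NTP synchronization is active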
Partition

socks5 Proxy
[root@DevOps ~]# vim /etc/profile
# append the following lines:
export ALL_PROXY="socks5://192.168.10.88:10808"
export https_proxy="http://192.168.10.88:10809"
export http_proxy="http://192.168.10.88:10809"
[root@DevOps ~]# source /etc/profile
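A quick check that the variables are active (the URL below is just a reachability probe):

env | grep -i proxy              # confirm the variables are exported
curl -I https://www.google.com   # should return headers if the proxy works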
containerd: configure a proxy via a systemd drop-in
- Configure the containerd proxy drop-in (run on k8s-node01)
mkdir -p /etc/systemd/system/containerd.service.d

cat >/etc/systemd/system/containerd.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.10.88:7899"
Environment="HTTPS_PROXY=http://192.168.10.88:7899"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16,.cluster.local,.svc,.svc.cluster.local,k8s-master01,k8s-master02,k8s-master03,k8s-node01,k8s-node02"
EOF
- Reload systemd and restart containerd
systemctl daemon-reload
systemctl restart containerd
- Verify that containerd picked up the proxy environment
systemctl show containerd --property=Environment
You should see HTTP_PROXY/HTTPS_PROXY/NO_PROXY in the output.
- Trigger an image pull to verify
Pick either one:
- With crictl (recommended)
crictl pull nginx:latest
- Or recreate the Pod
kubectl -n default delete pod <your-nginx-pod-name>
kubectl -n default describe pod <new-pod-name> | egrep -i "pull|image|error"
Configure the proxy for the k3s service too
If you are using k3s, it ships with its own containerd.
In some environments the unit that actually manages containerd is k3s.service (or the kubelet is managed by another unit). To avoid the mismatch where containerd is configured but k3s actually does the pulling and managing, it is worth adding the same drop-in (they do not conflict):
- Configure a proxy for k3s (if the k3s service exists)
systemctl status k3s 2>/dev/null | head
If k3s exists:
mkdir -p /etc/systemd/system/k3s.service.d

cat >/etc/systemd/system/k3s.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.10.88:7899"
Environment="HTTPS_PROXY=http://192.168.10.88:7899"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16,.cluster.local,.svc,.svc.cluster.local,k8s-master01,k8s-master02,k8s-master03,k8s-node01,k8s-node02"
EOF

systemctl daemon-reload
systemctl restart k3s
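To verify k3s picked up the proxy as well (same check as for containerd):

systemctl show k3s --property=Environment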
Gotchas (to avoid tripping over this again)
- NO_PROXY must include the Pod/Service CIDRs (already present here: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, etc.) and the cluster domains (already present: .cluster.local, .svc, ...)
- The proxy address must be reachable from node01 (already verified as working here)
- If you still see ImagePullBackOff, the most likely cause is that only the shell environment was changed and the systemd unit was not (which is exactly what this fix addresses)
Verification output to report back (just these two)
Run them and paste the output:
systemctl show containerd --property=Environment
crictl pull nginx:latest
Status summary
- Progress: root cause confirmed (node01 cannot reach docker.io directly + containerd had no proxy configured).
- Next step: configure the systemd proxy per "Plan A" (the drop-in above) and verify; once crictl pull succeeds, the ImagePullBackOff issue can be considered resolved.
containerd: delete a local image
crictl rmi nginx:latest
proxy-on
### Append to the end of the file
vi ~/.bashrc


# ============================================
# K8s master node proxy configuration
# ============================================
export HTTP_PROXY=http://192.168.10.88:7899
export HTTPS_PROXY=http://192.168.10.88:7899

# NO_PROXY (full version)
export NO_PROXY="localhost,127.0.0.1,\
10.0.0.0/8,10.96.0.0/12,10.96.0.10,\
172.16.0.0/12,172.16.0.0/16,\
192.168.0.0/16,\
169.254.0.0/16,\
.cluster.local,.svc,.svc.cluster.local,\
kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,\
k8s-master01,k8s-node01,k8s-node02,\
192.168.10.231,192.168.10.232,192.168.10.233"

# Compatibility (lowercase variants)
export http_proxy=$HTTP_PROXY
export https_proxy=$HTTPS_PROXY
export no_proxy=$NO_PROXY

# Convenience aliases (fixed version)
alias proxy-status='echo "HTTP_PROXY=$HTTP_PROXY"; echo "NO_PROXY=$NO_PROXY"'
alias proxy-test='curl -I https://www.google.com 2>&1 | head -5 && kubectl get nodes'
alias proxy-off='unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy NO_PROXY no_proxy && echo "✅ Proxy disabled"'
alias proxy-on='source ~/.bashrc && echo "✅ Proxy enabled"'
alias curl-k8s='curl --noproxy "*"'

# Confirm the configuration on shell startup
echo "✅ Proxy configured: $HTTP_PROXY"
echo "✅ NO_PROXY includes: $(echo $NO_PROXY | cut -d',' -f1-5)..."


### Apply immediately
source ~/.bashrc
Yum 2 Aliyun
### Back up the original yum repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo_bak

### Create a new yum repo config file
vi /etc/yum.repos.d/CentOS-Base.repo
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-vault/7.9.2009/os/$basearch/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos-vault/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-vault/7.9.2009/updates/$basearch/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos-vault/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-vault/7.9.2009/extras/$basearch/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/centos-vault/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-vault/7.9.2009/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://mirrors.aliyun.com/centos-vault/RPM-GPG-KEY-CentOS-7

#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=https://mirrors.aliyun.com/centos-vault/7.9.2009/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://mirrors.aliyun.com/centos-vault/RPM-GPG-KEY-CentOS-7

### Clear the cache and rebuild the metadata cache
yum clean all && yum makecache
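To confirm the new repos are active:

yum repolist enabled   # the Base/Updates/Extras entries should point at mirrors.aliyun.com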
Kubernetes yum repo
# Add the Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Refresh the yum cache
yum clean all
yum makecache

# List available versions
yum list kubeadm --showduplicates | grep 1.22

# Install a specific version of kubeadm
yum install -y kubeadm-1.22.17-0 kubelet-1.22.17-0 kubectl-1.22.17-0

# If you need to downgrade, use the downgrade command
yum downgrade -y kubeadm-1.22.17-0 kubelet-1.22.17-0 kubectl-1.22.17-0

# Check the installed version
kubeadm version
NFS Provisioner: persistence for a multi-node kubeadm cluster
Use one machine as the NFS server, install the NFS Subdir External Provisioner in K8s to create PVs dynamically, and make it the default StorageClass.
Confirm three things first (pick one node to be the NFS server):
- [NFS server IP]: e.g. 192.168.10.231 (preferably the master or a separate, stable machine)
- [Export directory]: e.g. /srv/nfs/k8s
- [Client nodes]: all 3 of your nodes must be able to reach this IP (231/232/233)
NFS dynamic provisioning then supplies persistent volumes for the Loki PVCs.
### Install and configure NFS on the NFS server (Rocky Linux)
# Install and start (run on k8s-master01)
sudo dnf -y install nfs-utils
sudo systemctl enable --now nfs-server

### Create the export directory
sudo mkdir -p /srv/nfs/k8s
sudo chmod 777 /srv/nfs/k8s

### Configure /etc/exports
echo "/srv/nfs/k8s 192.168.10.0/24(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
sudo exportfs -rav

### Open the firewall (if firewalld is enabled)
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --reload

### Install the NFS client on all K8s nodes (master + workers)
# run on every node
sudo dnf -y install nfs-utils

### Verify the nodes can mount the NFS export (strongly recommended)
# test on any node (replace the IP with your NFS server)
sudo mkdir -p /mnt/testnfs
sudo mount -t nfs <NFS_SERVER_IP>:/srv/nfs/k8s /mnt/testnfs
df -h | grep testnfs
sudo umount /mnt/testnfs

### Install the NFS dynamic provisioner + StorageClass in Kubernetes
# The officially maintained chart is recommended: nfs-subdir-external-provisioner
# Install (replace the NFS server IP/path with yours)
# storageClass.defaultClass=true: make it the default SC (a Loki PVC with no SC set will use it automatically)
# reclaimPolicy=Delete: deleting the PVC deletes the subdirectory (simpler; use Retain if you want to keep the data)

kubectl create ns nfs-provisioner

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

helm repo update

helm upgrade --install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
 -n nfs-provisioner \
 --set nfs.server=192.168.10.231 \
 --set nfs.path=/srv/nfs/k8s \
 --set storageClass.name=nfs-client \
 --set storageClass.defaultClass=true \
 --set storageClass.reclaimPolicy=Delete

### Check
# nfs-client should appear in sc and be marked (default)
[root@k8s-master01 ~]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client (default) cluster.local/nfs-subdir-external-provisioner Delete Immediate true 9m21s

# the provisioner Pod should be Running
[root@k8s-master01 ~]# kubectl -n nfs-provisioner get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-subdir-external-provisioner-65f6486bd6-ch5zq 1/1 Running 0 9m59s 172.16.85.212 k8s-node01 <none> <none>
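As a final end-to-end check, a throwaway PVC should bind automatically via the default SC (the PVC name below is arbitrary):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-nfs-pvc    # STATUS should turn Bound within seconds
kubectl delete pvc test-nfs-pvc # clean up (reclaimPolicy=Delete removes the subdirectory)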
Maven
https://blog.csdn.net/aaxzsuj/article/details/130524829
SpringBoot
Application entry class
package net.xdclass;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@MapperScan("net.xdclass.mapper")
public class UserApplication {
    public static void main(String[] args) {
        SpringApplication.run(UserApplication.class, args);
    }
}
application.yml
server:
  port: 9001

spring:
  application:
    name: xdclass-user-service

  # Database configuration
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://192.168.10.21:3307/xdclass_user?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai
    username: root
    password: abc1024.pub

# MyBatis-Plus: print SQL logs
mybatis-plus:
  configuration:
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl

# Log level: ERROR/WARN/INFO/DEBUG; by default only INFO and above are shown
logging:
  level:
    root: INFO
Docker
Install
# Install and run Docker.
yum install docker-io -y
systemctl start docker

# Check the installation.
docker info

# Manage the Docker daemon
systemctl start docker    # start the Docker daemon
systemctl stop docker     # stop the Docker daemon
systemctl restart docker  # restart the Docker daemon


# Configure registry mirrors
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [
    "https://docker.m.daocloud.io",
    "https://noohub.ru",
    "https://huecker.io",
    "https://dockerhub.timeweb.cloud",
    "https://proxy.1panel.live",
    "https://docker.1panel.top",
    "https://docker.1ms.run",
    "https://docker.ketches.cn",
    "https://05f073ad3c0010ea0f4bc00b7105ec20.mirror.swr.myhuaweicloud.com",
    "https://mirror.ccs.tencentyun.com",
    "https://0dj0t5fb.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://6kx4zyno.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://akchsmlh.mirror.aliyuncs.com",
    "https://hub-mirror.c.163.com",
    "https://docker.hpcloud.cloud",
    "https://docker.unsee.tech",
    "http://mirrors.ustc.edu.cn",
    "https://docker.chenby.cn",
    "http://mirror.azure.cn",
    "https://dockerpull.org",
    "https://dockerhub.icu",
    "https://hub.rat.dev"
  ]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

# Show info
docker info
daemon.json
https://patzer0.com/archives/configure-docker-registry-mirrors-with-mirrors-available-in-cn-mainland
[root@Flink ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://dockerpull.com"]
}

sudo systemctl daemon-reload
sudo systemctl restart docker
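To confirm Docker actually loaded the mirror list (docker info lists a "Registry Mirrors" section on recent versions):

docker info | grep -A3 'Registry Mirrors'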
Upgrade
Upgrade guide: https://blog.csdn.net/u011990675/article/details/141320931
Problem starting old containers after the upgrade: Error response from daemon: unknown or invalid runtime name: docker-runc
Fix: https://blog.csdn.net/weixin_40918145/article/details/133855258
MongoDB
Config file
net:
  port: 27017
  bindIp: "0.0.0.0"

storage:
  dbPath: "/data/db"

security:
  authorization: enabled
Command
docker run -it -d --name mongo \
-p 27017:27017 \
--net mynet \
--ip 172.18.0.8 \
-v /root/mongo:/etc/mongo \
-v /root/mongo/data/db:/data/db \
-m 400m --privileged=true \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=abc123456 \
-e TZ=Asia/Shanghai \
docker.io/mongo --config /etc/mongo/mongod.conf
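A quick connectivity check after startup (recent mongo images ship mongosh; on older image versions substitute the mongo shell):

docker exec -it mongo mongosh -u admin -p abc123456 --eval 'db.runCommand({ ping: 1 })'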
Redis
Config file
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
databases 12
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
requirepass abc123456
Command
docker run -it -d --name redis -m 200m \
-p 6379:6379 --privileged=true \
--net mynet --ip 172.18.0.9 \
-v /root/redis/conf:/usr/local/etc/redis \
-e TZ=Asia/Shanghai redis:6.0.10 \
redis-server /usr/local/etc/redis/redis.conf
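A one-line health check (the password matches requirepass in the config above):

docker exec -it redis redis-cli -a abc123456 ping   # should reply PONG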
RabbitMQ
Command
docker run -it -d --name mq \
--net mynet --ip 172.18.0.11 \
-p 5672:5672 -m 500m \
-e TZ=Asia/Shanghai --privileged=true \
rabbitmq
Minio
Open a browser at http://127.0.0.1:9001/login and fill in the login details to reach the web console. root abc123
Directories
docker load < Minio.tar.gz
mkdir /root/minio
mkdir /root/minio/data
chmod -R 777 /root/minio/data
Command
docker run -it -d --name minio \
-p 9000:9000 -p 9001:9001 \
-v /root/minio/data:/data \
-e TZ=Asia/Shanghai --privileged=true \
--env MINIO_ROOT_USER="root" \
--env MINIO_ROOT_PASSWORD="abc123456" \
-e MINIO_SKIP_CLIENT="yes" \
bitnami/minio:latest


### Latest version
docker run -it -d --name minio -m 400m \
-p 9000:9000 -p 9001:9001 \
-v /data/minio/data:/data \
-e TZ=Asia/Shanghai --privileged=true \
--env MINIO_ROOT_USER="root" \
--env MINIO_ROOT_PASSWORD="abc123456" \
bitnami/minio:latest

http://192.168.10.21:9001/login
Nacos
http://localhost:8848/nacos/
Default credentials: nacos / nacos
docker run -it -d -p 8848:8848 --env MODE=standalone \
--net mynet --ip 172.18.0.12 -e TZ=Asia/Shanghai \
--name nacos nacos/nacos-server

### new
docker run -d \
-e NACOS_AUTH_ENABLE=true \
-e MODE=standalone \
-e JVM_XMS=128m \
-e JVM_XMX=128m \
-e JVM_XMN=128m \
-p 8848:8848 \
-e SPRING_DATASOURCE_PLATFORM=mysql \
-e MYSQL_SERVICE_HOST=192.168.10.58 \
-e MYSQL_SERVICE_PORT=3306 \
-e MYSQL_SERVICE_USER=root \
-e MYSQL_SERVICE_PASSWORD=abc1024.pub \
-e MYSQL_SERVICE_DB_NAME=nacos_config \
-e MYSQL_SERVICE_DB_PARAM='characterEncoding=utf8&connectTimeout=10000&socketTimeout=30000&autoReconnect=true&useSSL=false' \
--restart=always \
--privileged=true \
-v /home/data/nacos/logs:/home/nacos/logs \
--name xdclass_nacos_auth \
nacos/nacos-server:2.0.2
Sentinel
Open http://localhost:8858/#/login in a browser; both the username and the password are sentinel.
docker run -it -d --name sentinel \
-p 8719:8719 -p 8858:8858 \
--net mynet --ip 172.18.0.13 \
-e TZ=Asia/Shanghai -m 600m \
bladex/sentinel-dashboard
MySQL
docker run \
 -p 3306:3306 \
 -e MYSQL_ROOT_PASSWORD=123456 \
 -v /home/data/mysql/conf:/etc/mysql/conf.d \
 -v /home/data/mysql/data:/var/lib/mysql:rw \
 -v /home/data/mysql/my.cnf:/etc/mysql/my.cnf \
 --name mysql \
 --restart=always \
 -d mysql:8.0.22

docker run \
 -p 3306:3306 \
 -e MYSQL_ROOT_PASSWORD=123456 \
 --name mysql \
 -d mysql:8.0.22
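A quick check that the server is up and accepting the root password:

docker exec -it mysql mysql -uroot -p123456 -e 'SELECT VERSION();'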
webssh
docker run -d --name webssh -p 5032:5032 --restart always lihaixin/webssh2:ssh
Mybatis-plus-generator
Dependencies
<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-generator</artifactId>
    <version>3.4.1</version>
</dependency>
<!-- velocity -->
<dependency>
    <groupId>org.apache.velocity</groupId>
    <artifactId>velocity-engine-core</artifactId>
    <version>2.0</version>
</dependency>
<!-- code generation dependencies end -->
Code (remember to update the lines marked TODO)
package net.xdclass.db;

import com.baomidou.mybatisplus.annotation.DbType;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.generator.AutoGenerator;
import com.baomidou.mybatisplus.generator.config.DataSourceConfig;
import com.baomidou.mybatisplus.generator.config.GlobalConfig;
import com.baomidou.mybatisplus.generator.config.PackageConfig;
import com.baomidou.mybatisplus.generator.config.StrategyConfig;
import com.baomidou.mybatisplus.generator.config.rules.DateType;
import com.baomidou.mybatisplus.generator.config.rules.NamingStrategy;

public class MyBatisPlusGenerator {

    public static void main(String[] args) {
        //1. Global configuration
        GlobalConfig config = new GlobalConfig();
        // Enable ActiveRecord mode
        config.setActiveRecord(true)
                // Author
                .setAuthor("soulboy")
                // Output path; an absolute path is best (Windows paths differ)
                //TODO TODO TODO TODO
                .setOutputDir("C:\\Users\\chao1\\Desktop\\demo\\src\\main\\java")
                // Overwrite existing files
                .setFileOverride(true)
                // Primary key strategy
                .setIdType(IdType.AUTO)

                .setDateType(DateType.ONLY_DATE)
                // Whether generated service interfaces start with I (by default Service is prefixed with I)
                .setServiceName("%sService")

                // Entity class name suffix
                .setEntityName("%sDO")

                // Generate a basic resultMap
                .setBaseResultMap(true)

                // Do not use ActiveRecord mode (overrides the earlier setting)
                .setActiveRecord(false)

                // Generate basic SQL column fragments
                .setBaseColumnList(true);

        //2. Data source configuration
        DataSourceConfig dsConfig = new DataSourceConfig();
        // Database type
        dsConfig.setDbType(DbType.MYSQL)
                .setDriverName("com.mysql.cj.jdbc.Driver")
                //TODO TODO TODO TODO
                .setUrl("jdbc:mysql://192.168.10.21:3307/xdclass_user?useSSL=false")
                .setUsername("root")
                .setPassword("abc1024.pub");

        //3. Strategy configuration
        StrategyConfig stConfig = new StrategyConfig();

        // Global capitalized naming
        stConfig.setCapitalMode(true)
                // Naming strategy for mapping tables to entities
                .setNaming(NamingStrategy.underline_to_camel)

                // Use Lombok
                .setEntityLombokModel(true)

                // Use the @RestController annotation
                .setRestControllerStyle(true)

                // Tables to generate; multiple tables can be listed together
                //TODO TODO TODO TODO
                .setInclude("user","address");

        //4. Package configuration
        PackageConfig pkConfig = new PackageConfig();
        pkConfig.setParent("net.xdclass")
                .setMapper("mapper")
                .setService("service")
                .setController("controller")
                .setEntity("model")
                .setXml("mapper");

        //5. Assemble the configuration
        AutoGenerator ag = new AutoGenerator();
        ag.setGlobalConfig(config)
                .setDataSource(dsConfig)
                .setStrategy(stConfig)
                .setPackageInfo(pkConfig);

        //6. Run the generator
        ag.execute();
        System.out.println("======= Done: code generation finished ========");
    }
}
SwaggerConfiguration
Dependencies
<!-- Swagger UI API documentation dependency -->
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-boot-starter</artifactId>
    <version>3.0.0</version>
</dependency>
SwaggerConfiguration
package net.xdclass.config;

import lombok.Data;
import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpMethod;
import org.springframework.stereotype.Component;
import springfox.documentation.builders.*;
import springfox.documentation.oas.annotations.EnableOpenApi;
import springfox.documentation.schema.ScalarType;
import springfox.documentation.service.*;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;

import java.util.ArrayList;
import java.util.List;

@Component
@EnableOpenApi
@Data
public class SwaggerConfiguration {

    /**
     * API docs for consumer-facing endpoints
     *
     * @return
     */
    @Bean
    public Docket webApiDoc() {

        return new Docket(DocumentationType.OAS_30)
                .groupName("Consumer API docs")
                .pathMapping("/")
                // Whether Swagger is enabled; false turns it off. Drive this from a config variable and disable it in production.
                .enable(true)
                // API doc metadata
                .apiInfo(apiInfo())
                // Select which endpoints are published in the Swagger docs
                .select()
                .apis(RequestHandlerSelectors.basePackage("net.xdclass"))
                // Match request paths and assign them to this group
                .paths(PathSelectors.ant("/api/**"))
                .build()
                // Swagger 3.0 configuration
                .globalRequestParameters(getGlobalRequestParameters())
                .globalResponses(HttpMethod.GET, getGlobalResponseMessage())
                .globalResponses(HttpMethod.POST, getGlobalResponseMessage());
    }


    /**
     * Global request parameters; multiple parameters are supported
     * and can carry token information
     * @return
     */
    private List<RequestParameter> getGlobalRequestParameters() {
        List<RequestParameter> parameters = new ArrayList<>();
        parameters.add(new RequestParameterBuilder()
                .name("token")
                .description("Login token")
                .in(ParameterType.HEADER)
                .query(q -> q.model(m -> m.scalarModel(ScalarType.STRING)))
                .required(false)
                .build());

//        parameters.add(new RequestParameterBuilder()
//                .name("version")
//                .description("Version number")
//                .required(true)
//                .in(ParameterType.HEADER)
//                .query(q -> q.model(m -> m.scalarModel(ScalarType.STRING)))
//                .required(false)
//                .build());

        return parameters;
    }

    /**
     * Common response messages
     *
     * @return
     */
    private List<Response> getGlobalResponseMessage() {
        List<Response> responseList = new ArrayList<>();
        responseList.add(new ResponseBuilder().code("4xx").description("Request error; check code and msg").build());
        return responseList;
    }

    /**
     * API doc metadata
     * @return
     */
    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                .title("1024 e-commerce platform")
                .description("Microservice API documentation")
                .contact(new Contact("soulboy", "abc1024.pub", "410686931@qq.com"))
                .version("v1.0")
                .build();
    }
}
AddressController
package net.xdclass.controller;

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import net.xdclass.service.AddressService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

import org.springframework.web.bind.annotation.RestController;
/**
 * <p>
 * E-commerce: shipping/receiving address table front-end controller
 * </p>
 *
 * @author soulboy
 * @since 2023-10-21
 */
@Api(tags = "Shipping address API")
@RestController
@RequestMapping("/api/address/v1")
public class AddressController {

    @Autowired
    AddressService addressService;

    @ApiOperation("Find address details by id")
    @GetMapping("find/{address_id}")
    public Object detail(@ApiParam(value = "address id", required = true)
                         @PathVariable("address_id") long addressId) {
        return addressService.detail(addressId);
    }
}
Access URL
http://192.168.10.88:9001/swagger-ui/index.html#/
Git
git add ./*

git commit -m "init2"

git push -u origin "master"
Hyper-v
### Disable
bcdedit /set hypervisorlaunchtype off

### Enable
bcdedit /set hypervisorlaunchtype auto
Docker image build: Maven plugin configuration
### Add a global property to the aggregator POM
<docker.image.prefix>xdclass-cloud</docker.image.prefix>

### Add to every microservice (remember to change the service name)
<build>
    <finalName>alibaba-cloud-user</finalName>

    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>

            <!-- Required; without it the image build cannot find the launch JAR -->
            <executions>
                <execution>
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                </execution>
            </executions>

            <configuration>
                <fork>true</fork>
                <addResources>true</addResources>
            </configuration>
        </plugin>

        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>1.4.10</version>
            <configuration>
                <repository>${docker.image.prefix}/${project.artifactId}</repository>
                <buildArgs>
                    <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
                </buildArgs>
            </configuration>
        </plugin>

    </plugins>

</build>
Dockerfile
### Dockerfile contents
#FROM adoptopenjdk/openjdk11:ubi
FROM adoptopenjdk/openjdk11:jre11u-nightly
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

### mvn build commands
# Step 1: run mvn clean install at the top level
mvn clean install

# Step 2: from the submodule's pom directory
mvn install -Dmaven.test.skip=true dockerfile:build
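After the build, the image should appear under the configured prefix:

docker images | grep xdclass-cloud   # e.g. xdclass-cloud/alibaba-cloud-user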
front-end
cnpm
npm install -g cnpm --registry=https://registry.npmmirror.com
JDK8
[root@ecs-8yZb5 ~]# mkdir -pv /usr/local/software
[root@MiWiFi-R3P-srv ~]# cd /usr/local/software/
[root@MiWiFi-R3P-srv software]# tar -zxvf jdk-8u181-linux-x64.tar.gz
[root@MiWiFi-R3P-srv software]# mv jdk1.8.0_181 jdk8
[root@MiWiFi-R3P-srv software]# vim /etc/profile
JAVA_HOME=/usr/local/software/jdk8
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
[root@MiWiFi-R3P-srv software]# source /etc/profile
[root@MiWiFi-R3P-srv software]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
System
DiskMount
### Show the device names and UUIDs of the current mounts
[root@iZbp19if2e8jvz5vlw7343Z ~]# blkid
/dev/vda1: UUID="c8b5b2da-5565-4dc1-b002-2a8b07573e22" TYPE="ext4"
/dev/vdb1: UUID="9669d5ae-04db-4502-9a2a-6ec1d312ff3e" TYPE="ext4" PARTLABEL="Linux" PARTUUID="39c4d643-e085-4290-9ef5-f27cf7ee1ef8"

### List all mount points on the system
[root@iZbp19if2e8jvz5vlw7343Z ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 16K 16G 1% /dev/shm
tmpfs 16G 672K 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/vda1 99G 41G 54G 44% /
tmpfs 3.1G 0 3.1G 0% /run/user/1002
/dev/vdb1 148G 73G 69G 52% /mysqlbin
tmpfs 3.1G 0 3.1G 0% /run/user/0

### Mount automatically at boot
[root@iZbp19if2e8jvz5vlw7343Z ~]# cat /etc/fstab
UUID=c8b5b2da-5565-4dc1-b002-2a8b07573e22 / ext4 defaults 1 1
/dev/vdb1 /mysqlbin ext4 defaults 0 2
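Device names like /dev/vdb1 can change across reboots; mounting by UUID is more robust. A sketch using the UUID reported by blkid above:

# /etc/fstab entry keyed on the UUID instead of the device name
UUID=9669d5ae-04db-4502-9a2a-6ec1d312ff3e /mysqlbin ext4 defaults 0 2
# validate all fstab entries without rebooting
mount -a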
rsyslogd (log management service)
rsyslogd is a key system log daemon on CentOS 7.
It is the successor to syslog, offering more features and better performance, and is the standard log management service on modern Linux systems.
Main functions:
- System log processing
- Collects system and application logs
- Handles locally generated log messages
- Can receive logs from remote systems
- Log classification and storage
- Classifies logs by facility and severity
- Stores logs under /var/log by default
- Can route different log types to different files
Key features:
- High-performance design
- Modular architecture
- TCP/UDP support
- Log filtering
- Log forwarding
Common configuration files:
- Main config file: /etc/rsyslog.conf
- Additional config directory: /etc/rsyslog.d/
# Check rsyslogd status
systemctl status rsyslog

# Restart the service
systemctl restart rsyslog

# Start the service
systemctl start rsyslog

# Stop the service
systemctl stop rsyslog
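A small illustration of facility/severity routing (the rule file name and the local0 facility are arbitrary choices for this sketch):

# route local0 messages of severity info and above to a dedicated file
echo 'local0.info /var/log/demo.log' > /etc/rsyslog.d/demo.conf
systemctl restart rsyslog
# emit a test message on facility local0 at severity info
logger -p local0.info "hello from logger"
tail -n1 /var/log/demo.log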
Rather than capping log size directly in /etc/rsyslog.conf, the recommended approach is to manage log size with logrotate, which provides a more complete log rotation solution. Steps:
### Make sure logrotate is installed
logrotate --version

### Config file: /etc/logrotate.d/rsyslog
/var/log/messages {
    daily                 # rotate once a day
    rotate 7              # keep the last 7 archives (7 days)
    dateext               # use the date as the suffix
    dateformat -%Y%m%d    # date format
    compress              # compress old logs
    missingok             # do not raise an error if the log is missing
    notifempty            # do not rotate empty files
    create 0600 root root # permissions and owner for the new log file
    postrotate
        /bin/kill -HUP $(cat /var/run/syslogd.pid 2>/dev/null) 2>/dev/null || true
    endscript
}

### The same config with the comments removed: /etc/logrotate.d/rsyslog
/var/log/messages {
    su root root
    daily
    rotate 7
    dateext
    dateformat -%Y%m%d
    compress
    missingok
    notifempty
    create 0600 root root
    postrotate
        /bin/kill -HUP $(cat /var/run/syslogd.pid 2>/dev/null) 2>/dev/null || true
    endscript
}

### Dry-run to test that the configuration is correct
logrotate -d /etc/logrotate.d/rsyslog


### Force an immediate rotation
logrotate -f /etc/logrotate.d/rsyslog
To apply the same rules to multiple log files:
/var/log/messages /var/log/secure /var/log/maillog /var/log/cron {
    su root root
    daily
    rotate 7
    dateext
    dateformat -%Y%m%d
    compress
    missingok
    notifempty
    create 0600 root root
    postrotate
        /bin/kill -HUP $(cat /var/run/syslogd.pid 2>/dev/null) 2>/dev/null || true
    endscript
}
Example rotated file names:
messages-20240301.gz
messages-20240302.gz
messages-20240303.gz
...
messages-20240307.gz
messages # current log file
Scheduled task: run logrotate at one minute past midnight every day
1 0 * * * /usr/sbin/logrotate /etc/logrotate.d/rsyslog