docker-compose.yaml

services:
  registry:
    image: registry:2.8.3
    container_name: registry
    restart: always
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - "${COMPOSE_DATA_DIR:-/data}/registry:/var/lib/registry"
    ports:
      - "5000:5000"

  registry-ui:
    image: joxit/docker-registry-ui:2.5.7
    container_name: registry-ui
    restart: always
    depends_on:
      - registry
    environment:
      - TZ=Asia/Shanghai
      - SINGLE_REGISTRY=true
      - SHOW_CONTENT_DIGEST=true
      - PULL_URL=http://hub.starudream.local
      - NGINX_PROXY_PASS_URL=http://registry:5000
    ports:
      #- "80:80"
      - "5001:80"

registry-ui ships with its own nginx that reverse-proxies the registry; if there is no other gateway in front, you can expose it directly on port 80.

nginx reverse-proxy configuration

http {
    server {
        listen 8080;

        location /v2/ {
            proxy_pass http://10.252.25.215:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Real-PORT $remote_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /registry/ {
            proxy_pass http://10.252.25.215:5001/;
        }
    }
}

Preparation

  • fdisk -l
root@dev215 [~]# fdisk -l
GPT PMBR size mismatch (209715199 != 419430399) will be corrected by write.
The backup GPT table is not on the end of the device.
Disk /dev/sda: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 406F0824-6F10-4C6F-8522-FFF0A017AA44

Device       Start       End   Sectors Size Type
/dev/sda1     2048      4095      2048   1M BIOS boot
/dev/sda2     4096   2101247   2097152   1G Linux filesystem
/dev/sda3  2101248 209713151 207611904  99G Linux LVM


Disk /dev/mapper/opencloudos-root: 66.52 GiB, 71424802816 bytes, 139501568 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/opencloudos-home: 32.48 GiB, 34871443456 bytes, 68108288 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

A Disklabel type of dos indicates an MBR partition table; gpt indicates a GPT partition table.

  • lsblk
root@dev215 [~]# lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                      8:0    0  200G  0 disk
├─sda1                   8:1    0    1M  0 part
├─sda2                   8:2    0    1G  0 part /boot
└─sda3                   8:3    0   99G  0 part
  ├─opencloudos-root   251:0    0 66.5G  0 lvm  /
  └─opencloudos-home   251:1    0 32.5G  0 lvm  /home

Extend the partition

yum install -y cloud-utils-growpart gdisk

MBR

LC_ALL=en_US.UTF-8 growpart /dev/vdb 1

GPT

LC_ALL=en_US.UTF-8 growpart /dev/sda 3

Output beginning with CHANGED means the partition was extended successfully.

root@dev215 [/data]# growpart /dev/sda 3
CHANGED: partition=3 start=2101248 old: size=207611904 end=209713151 new: size=417329119 end=419430366

root@dev215 [/data]# lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                      8:0    0  200G  0 disk
├─sda1                   8:1    0    1M  0 part
├─sda2                   8:2    0    1G  0 part /boot
└─sda3                   8:3    0  199G  0 part
  ├─opencloudos-root   251:0    0 66.5G  0 lvm  /
  └─opencloudos-home   251:1    0 32.5G  0 lvm  /home

Extend the logical volume

In the lsblk output above, both the root directory / and /home are of type lvm, so they are grown through LVM (physical volume, then logical volume, then filesystem):

pvresize /dev/sda3
lvextend -L +100G /dev/opencloudos/root
xfs_growfs /
root@dev215 [/data]# pvs
PV         VG          Fmt  Attr PSize    PFree
/dev/sda3  opencloudos lvm2 a--  <199.00g 100.00g

root@dev215 [/data]# pvresize /dev/sda3
Physical volume "/dev/sda3" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized

root@dev215 [/data]# lvdisplay
--- Logical volume ---
LV Path /dev/opencloudos/home
LV Name home
VG Name opencloudos
LV UUID Mrv5DQ-JeQx-mKdM-Vn9u-qOtw-Yl9k-TbkTjH
LV Write Access read/write
LV Creation host, time dev215, 2024-11-01 10:35:13 +0800
LV Status available
# open 1
LV Size <32.48 GiB
Current LE 8314
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:1

--- Logical volume ---
LV Path /dev/opencloudos/root
LV Name root
VG Name opencloudos
LV UUID pmaZSg-3JnI-yXfm-8N5Z-arW7-mstK-P0Atir
LV Write Access read/write
LV Creation host, time dev215, 2024-11-01 10:35:14 +0800
LV Status available
# open 1
LV Size <66.52 GiB
Current LE 17029
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:0

root@dev215 [/data]# lvextend -L +100G /dev/opencloudos/root
Size of logical volume opencloudos/root changed from <66.52 GiB (17029 extents) to <166.52 GiB (42629 extents).
Logical volume opencloudos/root successfully resized.

root@dev215 [/data]# lvdisplay
--- Logical volume ---
LV Path /dev/opencloudos/home
LV Name home
VG Name opencloudos
LV UUID Mrv5DQ-JeQx-mKdM-Vn9u-qOtw-Yl9k-TbkTjH
LV Write Access read/write
LV Creation host, time dev215, 2024-11-01 10:35:13 +0800
LV Status available
# open 1
LV Size <32.48 GiB
Current LE 8314
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:1

--- Logical volume ---
LV Path /dev/opencloudos/root
LV Name root
VG Name opencloudos
LV UUID pmaZSg-3JnI-yXfm-8N5Z-arW7-mstK-P0Atir
LV Write Access read/write
LV Creation host, time dev215, 2024-11-01 10:35:14 +0800
LV Status available
# open 1
LV Size <166.52 GiB
Current LE 42629
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 251:0

root@dev215 [/data]# pvs
PV         VG          Fmt  Attr PSize    PFree
/dev/sda3  opencloudos lvm2 a--  <199.00g 0

Extend the filesystem

ext*

resize2fs /dev/vdb1

xfs

yum install -y xfsprogs
xfs_growfs /mnt

Ref

  • https://web.archive.org/web/20241113015538/https://help.aliyun.com/zh/ecs/user-guide/extend-the-partitions-and-file-systems-of-disks-on-a-linux-instance
  • https://web.archive.org/web/20241113015851/https://help.aliyun.com/zh/ecs/use-cases/extend-an-lv-by-using-lvm

Pairing with MyBatis-Plus's IEnum type to improve how enums are displayed

Before the change (screenshot)

After the change (screenshot)

EnumConverter

import cn.hutool.core.convert.Convert;
import cn.hutool.core.util.ReflectUtil;
import com.baomidou.mybatisplus.annotation.IEnum;
import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.swagger.v3.core.converter.AnnotatedType;
import io.swagger.v3.core.converter.ModelConverter;
import io.swagger.v3.core.converter.ModelConverterContext;
import io.swagger.v3.oas.models.media.Schema;
import lombok.RequiredArgsConstructor;
import org.springdoc.core.providers.ObjectMapperProvider;
import org.springframework.stereotype.Component;

import java.util.Iterator;

@Component
@RequiredArgsConstructor
public class EnumConverter implements ModelConverter {

    private final ObjectMapperProvider objectMapperProvider;

    @SuppressWarnings({"unchecked", "rawtypes"})
    @Override
    public Schema<?> resolve(AnnotatedType type, ModelConverterContext context, Iterator<ModelConverter> chain) {
        Schema<?> nextSchema = chain.hasNext() ? chain.next().resolve(type, context, chain) : null;
        ObjectMapper objectMapper = objectMapperProvider.jsonMapper();
        JavaType javaType = objectMapper.constructType(type.getType());
        if (javaType != null && javaType.isEnumType()) {
            Class<Enum> enumClass = (Class<Enum>) javaType.getRawClass();
            Enum[] enums = enumClass.getEnumConstants();
            // Collect a "- value: description" line for every IEnum constant.
            StringBuilder builder = new StringBuilder();
            for (Enum en : enums) {
                if (en instanceof IEnum) {
                    var value = ReflectUtil.getFieldValue(en, "value");
                    var description = ReflectUtil.getFieldValue(en, "description");
                    builder.append(String.format("- %s: %s\n", Convert.toStr(value), Convert.toStr(description)));
                }
            }
            if (nextSchema != null && !builder.isEmpty()) {
                // Keep the original description as the title and replace the
                // description with the value/description list.
                nextSchema.setTitle(nextSchema.getDescription());
                nextSchema.setDescription(builder.toString());
            }
        }
        return nextSchema;
    }

}
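
For context, here is a minimal sketch of an enum this converter can pick up. The enum name and values are hypothetical; the only assumptions are that it implements MyBatis-Plus's IEnum and carries value and description fields, which EnumConverter reads via reflection.

import com.baomidou.mybatisplus.annotation.IEnum;

// Hypothetical example enum: EnumConverter turns its constants into
// "- value: description" lines in the generated OpenAPI schema.
public enum OrderStatus implements IEnum<Integer> {

    CREATED(1, "order created"),
    PAID(2, "order paid"),
    CLOSED(3, "order closed");

    private final Integer value;
    private final String description;

    OrderStatus(Integer value, String description) {
        this.value = value;
        this.description = description;
    }

    @Override
    public Integer getValue() {
        return value;
    }

    public String getDescription() {
        return description;
    }
}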

MyBatis-Plus can only auto-fill fields (via MetaObjectHandler) when an entity object is passed to the update, so the aspect below injects an empty entity when only a Wrapper is supplied to BaseMapper.update.

MybatisMetaAspect

import cn.hutool.core.util.ReflectUtil;
import com.baomidou.mybatisplus.core.conditions.AbstractWrapper;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

@Slf4j
@Aspect
@Component
@RequiredArgsConstructor
public class MybatisMetaAspect {

    // Cache one empty entity instance per entity class.
    private final ConcurrentMap<Class<?>, Object> entityMap = new ConcurrentHashMap<>();

    @Pointcut("execution(* com.baomidou.mybatisplus.core.mapper.BaseMapper.update(com.baomidou.mybatisplus.core.conditions.Wrapper))")
    public void update() {
    }

    @Around("update()")
    public Object aroundUpdate(ProceedingJoinPoint pjp) {
        try {
            Object result = invoke(pjp);
            if (result != null) {
                return result;
            }
            return pjp.proceed(pjp.getArgs());
        } catch (Throwable e) {
            log.error("mybatis around update error", e);
        }
        return null;
    }

    private Object invoke(ProceedingJoinPoint pjp) {
        Object[] args = pjp.getArgs();
        if (args == null || args.length != 1) {
            return null;
        }
        Object arg = args[0];
        if (arg instanceof AbstractWrapper<?, ?, ?> wrapper) {
            Class<?> entityClass = wrapper.getEntityClass();
            if (entityClass == null) {
                log.warn("Detected that the entity is empty, which will cause automatic filling to fail. Please use `new UpdateWrapper<>(new T())` or `new UpdateWrapper<>(T.class)` instead.");
                return null;
            }
            Object entity = entityMap.get(entityClass);
            if (entity == null) {
                entity = ReflectUtil.newInstance(entityClass);
                entityMap.put(entityClass, entity);
            }
            // Re-invoke the two-argument update(entity, wrapper) so that
            // MyBatis-Plus meta object auto-fill runs against the entity.
            return ReflectUtil.invoke(pjp.getThis(), "update", entity, arg);
        }
        return null;
    }

}
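
A minimal usage sketch of what the aspect expects from callers. The User entity and UserMapper are hypothetical, and it assumes a MetaObjectHandler that fills fields such as updateTime on UPDATE, plus a MyBatis-Plus version that exposes the single-argument BaseMapper.update(Wrapper) targeted by the pointcut above.

import com.baomidou.mybatisplus.core.conditions.update.UpdateWrapper;
import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Service;

// Hypothetical service showing the wrapper construction the aspect needs.
@Service
@RequiredArgsConstructor
public class UserService {

    private final UserMapper userMapper;

    public void rename(Long id, String name) {
        // A bare new UpdateWrapper<>() would leave getEntityClass() null, so
        // the aspect could only log a warning and auto-fill would be skipped.
        // Constructing the wrapper with an (empty) entity lets the aspect
        // resolve the entity class and re-invoke update(entity, wrapper),
        // so the MetaObjectHandler fill hooks run as usual.
        UpdateWrapper<User> wrapper = new UpdateWrapper<>(new User());
        wrapper.eq("id", id).set("name", name);
        userMapper.update(wrapper);
    }
}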

RedisAppender

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.classic.spi.IThrowableProxy;
import ch.qos.logback.classic.spi.ThrowableProxyUtil;
import ch.qos.logback.core.UnsynchronizedAppenderBase;
import cn.hutool.core.util.StrUtil;
import cn.hutool.json.JSONUtil;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.async.RedisAsyncCommands;
import lombok.Getter;
import lombok.Setter;

import java.time.Duration;
import java.util.Date;

@Getter
@Setter
public class RedisAppender extends UnsynchronizedAppenderBase<ILoggingEvent> {

    private String host;
    private Integer port;
    private String password;
    private String key;

    private RedisClient redisClient;
    private RedisAsyncCommands<String, String> async;

    @Override
    protected void append(ILoggingEvent event) {
        // Skip silently if the Redis connection was never established.
        if (async == null) {
            return;
        }
        LoggerEntity entity = new LoggerEntity();
        entity.setTimestamp(new Date(event.getTimeStamp()));
        entity.setLevel(event.getLevel().toString());
        entity.setCaller(event.getLoggerName());
        entity.setThread(event.getThreadName());
        entity.setMessage(event.getFormattedMessage());
        IThrowableProxy throwableProxy = event.getThrowableProxy();
        if (throwableProxy != null) {
            entity.setThrowable(ThrowableProxyUtil.asString(throwableProxy));
        }
        // Push asynchronously so logging never blocks the calling thread.
        async.rpush(key, JSONUtil.toJsonStr(entity));
    }

    @Override
    public void start() {
        super.start();
        initRedis();
    }

    private void initRedis() {
        if (StrUtil.isEmpty(host) || StrUtil.isEmpty(key)) {
            return;
        }
        try {
            RedisURI.Builder builder = RedisURI
                    .builder()
                    .withHost(host)
                    .withPort(port)
                    .withTimeout(Duration.ofSeconds(10));
            if (StrUtil.isNotEmpty(password)) {
                builder.withPassword(password.toCharArray());
            }
            redisClient = RedisClient.create(builder.build());
            async = redisClient.connect().async();
        } catch (Exception e) {
            System.out.printf("Initialize Logger Redis Exception: %s%n", e.getMessage());
        }
    }

    @Override
    public void stop() {
        closeRedis();
        super.stop();
    }

    private void closeRedis() {
        if (redisClient != null) {
            redisClient.close();
        }
    }

}
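
The LoggerEntity used above is not shown in this post; here is a minimal sketch whose field names are inferred from the setters called in append() and are therefore an assumption. Whatever names are used must match what the logstash filter expects.

import lombok.Data;

import java.util.Date;

// Minimal sketch of LoggerEntity; serialized as-is by hutool's JSONUtil.
@Data
public class LoggerEntity {

    private Date timestamp;
    private String level;
    private String caller;
    private String thread;
    private String message;
    private String throwable;
}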

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <include resource="org/springframework/boot/logging/logback/base.xml"/>

    <appender name="REDIS" class="xxx.RedisAppender">
        <host>${LOGGER_REDIS_HOST:-}</host>
        <port>${LOGGER_REDIS_PORT:-6379}</port>
        <password>${LOGGER_REDIS_PASSWORD:-}</password>
        <key>${LOGGER_REDIS_KEY:-log:test}</key>
    </appender>

    <root level="info">
        <appender-ref ref="CONSOLE"/>
        <!-- <appender-ref ref="FILE"/> -->
        <appender-ref ref="REDIS"/>
    </root>

</configuration>

logstash

input {
  redis {
    id        => "test"
    host      => "xxxx"
    port      => "xxxx"
    password  => "p1ssw0rd"
    data_type => "list"
    threads   => 4
    key       => "log:test"
    type      => "test"
  }
}

filter {
  date {
    match        => ["ts", "UNIX", "UNIX_MS", "ISO8601"]
    timezone     => "Asia/Shanghai"
    target       => "@timestamp"
    remove_field => ["ts"]
  }
  ruby {
    code => '
      if event.get("msg") and event.get("msg").length > 512*1024
        event.set("msg", "IGNORED LARGE MSG")
      end
    '
  }
}

output {
  elasticsearch {
    id       => "log"
    hosts    => "es-master:9200"
    user     => "logstash_internal"
    password => "${LOGSTASH_INTERNAL_PASSWORD}"
    index    => "logstash-%{type}-%{+YYYY.MM.dd}"
  }
}