Deploying ELK with docker-compose and collecting Spring Boot application logs
Deploy Elasticsearch
Create directories
mkdir -p /data/elk/es/data
mkdir -p /data/elk/es/config
chmod 777 /data/elk/es/data
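The official Elasticsearch Docker documentation recommends raising vm.max_map_count to at least 262144 on the host; if the container later exits complaining about it, set it first (an extra step not in the original walkthrough, shown as a hedged example):
# raise the mmap count limit recommended for Elasticsearch (add it to /etc/sysctl.conf to persist across reboots)
sysctl -w vm.max_map_count=262144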
Prepare the certificate
First, start a throwaway Elasticsearch container
docker run -d --name elasticsearch \
-p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
registry.cn-shanghai.aliyuncs.com/mamy-ns/elasticsearch:8.15.3
Generate the certificate with the tool bundled with Elasticsearch
docker exec -it elasticsearch /bin/bash
./bin/elasticsearch-certutil ca
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
After the commands above finish, an elastic-certificates.p12 certificate file is generated under /usr/share/elasticsearch. Copy it out of the container; it will be mounted into the container started by docker-compose later.
docker cp elasticsearch:/usr/share/elasticsearch/elastic-certificates.p12 /data/elk/es/config/
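The throwaway container still owns the elasticsearch name and ports 9200/9300, so remove it before bringing the stack up with docker-compose (assuming you no longer need it):
# stop and remove the temporary container so its name and ports are free for docker-compose
docker rm -f elasticsearch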
elasticsearch.yml
Create elasticsearch.yml under /data/elk/es/config
network.host: 0.0.0.0
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.type: PKCS12
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/elastic-certificates.p12
xpack.security.transport.ssl.truststore.type: PKCS12
xpack.security.audit.enabled: true
docker-compose.yml
Create docker-compose.yml under /data/elk/es
version: '3.1'
services:
  elasticsearch:
    image: registry.cn-shanghai.aliyuncs.com/mamy-ns/elasticsearch:8.15.3
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      node.name: es
      cluster.name: elasticsearch
      discovery.type: single-node
      ES_JAVA_OPTS: -Xms1024m -Xmx1024m
    volumes:
      - /data/elk/es/data:/usr/share/elasticsearch/data
      - /data/elk/es/config/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
      - /data/elk/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
Start Elasticsearch
Run the following command in /data/elk/es to start Elasticsearch
docker-compose up -d
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
87204bbdd91d registry.cn-shanghai.aliyuncs.com/mamy-ns/elasticsearch:8.15.3 "/bin/tini -- /usr/l…" 4 hours ago Up About an hour 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp es_elasticsearch_1
Set the passwords
docker exec -it 87204bbdd91d /bin/bash
./bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
......
# set all of the following passwords to 123456 (pick your own here)
Create a new user
# create a new account
elasticsearch-users useradd sunny -p 123456
# grant roles to the account
elasticsearch-users roles -a superuser sunny
elasticsearch-users roles -a kibana_system sunny
If the following page comes back, Elasticsearch started successfully.
Log in with the username sunny and password 123456.
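As a quick sanity check from the shell (a hedged example, assuming the host IP 192.168.101.10 used throughout this post), the cluster banner should come back with HTTP basic auth:
# should return the cluster name, version and "You Know, for Search" tagline as JSON
curl -u sunny:123456 http://192.168.101.10:9200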
Deploy Kibana
Create a directory
mkdir /data/elk/kibana
docker-compose.yml
Create docker-compose.yml under /data/elk/kibana
version: '3.1'
services:
  kibana:
    image: registry.cn-shanghai.aliyuncs.com/mamy-ns/kibana:8.15.3
    container_name: kibana
    environment:
      - "TZ=Asia/Shanghai"
      - "I18N_LOCALE=zh-CN"
      - "ELASTICSEARCH_HOSTS=http://192.168.101.10:9200"
      - "ELASTICSEARCH_USERNAME=sunny"
      - "ELASTICSEARCH_PASSWORD=123456"
    ports:
      - "5601:5601"
Start Kibana
docker-compose up -d
Access Kibana in the browser on port 5601.
Log in with sunny / 123456.
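If the page does not load right away, Kibana can take a minute to start; tailing the container log is an easy way to watch it come up (a generic check, nothing specific to this setup):
# watch Kibana start; it logs a message once the server is ready on port 5601
docker logs -f kibana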
Deploy Logstash
Create directories
mkdir -p /data/elk/logstash/config
mkdir -p /data/elk/logstash/pipeline
logstash.yml
Create logstash.yml under /data/elk/logstash/config
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.101.10:9200" ]
xpack.monitoring.elasticsearch.username: "sunny"
xpack.monitoring.elasticsearch.password: "123456"
logstash.conf
Create logstash.conf under /data/elk/logstash/pipeline (the path mounted in docker-compose.yml below)
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5044
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["192.168.101.10:9200"]
    index => "bdms--%{+YYYY.MM.dd}"
    user => "sunny"
    password => "123456"
  }
}
docker-compose.yml
Create docker-compose.yml under /data/elk/logstash
version: '3.1'
services:
  logstash:
    image: registry.cn-shanghai.aliyuncs.com/mamy-ns/logstash:8.4.3
    container_name: logstash
    hostname: logstash
    privileged: true
    ports:
      - '9600:9600'
      - '5044:5044'
    volumes:
      - /data/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - /data/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
Start Logstash
docker-compose up -d
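Before wiring up the application, the TCP input can be smoke-tested directly (a hedged example; it assumes netcat is installed and reuses the json_lines codec and bdms--* index pattern from logstash.conf):
# send one newline-terminated JSON document to the TCP input
echo '{"message":"pipeline smoke test","appname":"emergency-bdms"}' | nc 192.168.101.10 5044
# a few seconds later the daily index should exist and hold the document
curl -u sunny:123456 'http://192.168.101.10:9200/_cat/indices/bdms--*?v'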
Integrate Logstash with Spring Boot
Spring Boot version: 3.2.6
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.2.6</version>
    </parent>
    <groupId>com.sunny</groupId>
    <artifactId>excel</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>excel</name>
    <description>excel</description>
    <properties>
        <java.version>17</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>7.0</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <excludes>
                        <exclude>
                            <groupId>org.projectlombok</groupId>
                            <artifactId>lombok</artifactId>
                        </exclude>
                    </excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
logback.xml
In the logstash appender, set the destination to the Logstash TCP input address 192.168.101.10:5044.
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <!-- log file path -->
    <property name="log.path" value="logs/emergency-bdms" />
    <!-- log output pattern -->
    <property name="log.pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{20} - [%method,%line] - %msg%n" />
    <!-- console output -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
    </appender>
    <!-- logstash -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <param name="Encoding" value="UTF-8"/>
        <destination>192.168.101.10:5044</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" >
            <customFields>{"appname":"emergency-bdms"}</customFields>
        </encoder>
    </appender>
    <!-- application module log level -->
    <logger name="com.zdyj" level="info" />
    <!-- Spring log level -->
    <logger name="org.springframework" level="warn" />
    <root level="info">
        <appender-ref ref="console" />
        <appender-ref ref="logstash" />
    </root>
</configuration>
Write an endpoint that produces some log output (in a controller class annotated with Lombok's @Slf4j):
@GetMapping("/info")
public void info() {
    log.warn("{}-ABCDEF", System.currentTimeMillis());
}
Generate some log entries by calling the endpoint a few times.
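For example, from the shell (a hedged sketch that assumes the application runs locally on port 8080 with no context path):
# hit the endpoint a few times to produce WARN log lines
for i in $(seq 1 10); do curl -s http://localhost:8080/info; done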
Configure a view in Kibana
Drag the available fields into the table on the right.
Final result
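To double-check outside Kibana that the documents arrived (a hedged query reusing the index pattern from logstash.conf):
# fetch one recent document from the bdms-- daily indices
curl -u sunny:123456 'http://192.168.101.10:9200/bdms--*/_search?pretty&size=1'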