mailweb.openeuler.org

Compass-ci

compass-ci@openeuler.org

  • 1 participant
  • 5235 discussions
[PATCH compass-ci 2/4] src/scheduler/kernel_params.cr: add param pv_device into dracut initrd for local mount
by Xu Xijian 31 Mar '21

Signed-off-by: Xu Xijian <hdxuxijian(a)163.com>
---
 src/scheduler/kernel_params.cr | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/scheduler/kernel_params.cr b/src/scheduler/kernel_params.cr
index 8ad5029..9e2f58b 100644
--- a/src/scheduler/kernel_params.cr
+++ b/src/scheduler/kernel_params.cr
@@ -9,7 +9,8 @@ class Job
     os_info = "#{os}_#{os_arch}_#{os_version}"
     use_root_partition = "/dev/mapper/os-#{os_info}_#{src_lv_suffix}" if @hash["src_lv_suffix"]? != nil
     save_root_partition = "/dev/mapper/os-#{os_info}_#{boot_lv_suffix}" if @hash["boot_lv_suffix"]? != nil
-    return "#{common_params} local use_root_partition=#{use_root_partition} save_root_partition=#{save_root_partition} rw"
+    pv_device = "#{self.pv_device}" if @hash["pv_device"]? != nil
+    return "#{common_params} local use_root_partition=#{use_root_partition} save_root_partition=#{save_root_partition} pv_device=#{pv_device} rw"
   end

   private def kernel_custom_params
--
2.23.0
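The Crystal change above can be sketched in Ruby as follows. This is an illustrative model only, not the scheduler's actual code: the `local_mount_params` function and the plain hash stand in for the Job class and its @hash.

```ruby
# Sketch of the kernel command line assembly after this patch: the lv
# partitions are derived from optional job fields, and pv_device is
# interpolated the same way (empty when the job does not supply it).
def local_mount_params(hash, common_params)
  os_info = "#{hash['os']}_#{hash['os_arch']}_#{hash['os_version']}"
  use_root = "/dev/mapper/os-#{os_info}_#{hash['src_lv_suffix']}" if hash['src_lv_suffix']
  save_root = "/dev/mapper/os-#{os_info}_#{hash['boot_lv_suffix']}" if hash['boot_lv_suffix']
  pv_device = hash['pv_device']
  "#{common_params} local use_root_partition=#{use_root} " \
    "save_root_partition=#{save_root} pv_device=#{pv_device} rw"
end
```

A job that submits `pv_device=/dev/sda` would thus see `pv_device=/dev/sda` appear on the kernel command line, where the dracut script in patch 4/4 picks it up with getarg.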
[PATCH compass-ci 4/4] container/dracut-initrd: enable creating pv and vg for lv to support local mount in HW machine
by Xu Xijian 31 Mar '21

[why]
If a HW machine has no physical volume or volume group for the logical volume, we should support creating them via a job parameter. This parameter is named pv_device (default: empty, does nothing). If a usable $pv_device is given, dracut will check the device and ensure the pv and vg exist.

Usage:
    submit job.yaml ... pv_device=/dev/sda

Signed-off-by: Xu Xijian <hdxuxijian(a)163.com>
---
 .../dracut-initrd/bin/set-local-sysroot.sh | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/container/dracut-initrd/bin/set-local-sysroot.sh b/container/dracut-initrd/bin/set-local-sysroot.sh
index 835bdd2..2a07270 100644
--- a/container/dracut-initrd/bin/set-local-sysroot.sh
+++ b/container/dracut-initrd/bin/set-local-sysroot.sh
@@ -14,9 +14,36 @@ analyse_kernel_cmdline_params() {
 sync_src_lv() {
     local src_lv="$1"
+    local vg_name="os"
     [ -e "$src_lv" ] && return

+    # need create volume group, usually in first use of this machine. $pv_device e.g. /dev/sda
+    pv_device="$(getarg pv_device=)"
+    [ -n "$pv_device" ] && {
+        [ -b "$pv_device" ] || {
+            echo "warn dracut: FATAL: device not found: $pv_device, reboot"
+            reboot
+        }
+
+        # ensure the physical disk has been initialized as physical volume
+        real_pv_device="$(lvm pvs | grep -w $pv_device | awk '{print $1}')"
+        [ "$real_pv_device" = "$pv_device" ] || {
+            lvm pvcreate "$pv_device" || reboot
+        }
+
+        # ensure the volume group $vg_name exists
+        real_vg_name="$(lvm pvs | grep -w $vg_name | awk '{print $2}')"
+        [ "$real_vg_name" = "$vg_name" ] || {
+            lvm vgcreate "$vg_name" "$pv_device" || reboot
+        }
+    }
+
+    lvm vgs "$vg_name" || {
+        echo "warn dracut: FATAL: vg os not found, reboot"
+        reboot
+    }
+
     # create logical volume
     src_lv_devname="$(basename $src_lv)"
     lvm lvcreate -y -L 10G --name "${src_lv_devname#os-}" os
--
2.23.0
[PATCH v2 compass-ci 2/2] container/es: upgrade version
by Wu Zhende 31 Mar '21

Upgrade to 7.11.1 to match the kibana version, so that data can be displayed in kibana.

Signed-off-by: Wu Zhende <wuzhende666(a)163.com>
---
 container/es/Dockerfile    | 48 ++++++++++++--------------------------
 container/es/build         |  4 ++--
 container/es/start         |  2 +-
 container/logging-es/build |  8 +++----
 container/logging-es/start |  2 +-
 5 files changed, 23 insertions(+), 41 deletions(-)

diff --git a/container/es/Dockerfile b/container/es/Dockerfile
index ed02490..44d87ee 100644
--- a/container/es/Dockerfile
+++ b/container/es/Dockerfile
@@ -1,41 +1,24 @@
 # SPDX-License-Identifier: MulanPSL-2.0+
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.

-FROM alpine:3.11
+FROM elasticsearch:7.11.1@sha256:d52cda1e73d1b1915ba2d76ca1e426620c7b5d6942d9d2f432259503974ba786

 ARG MEMORY

-RUN sed -ri.origin 's|^https?://dl-cdn.alpinelinux.org|http://mirrors.huaweicloud.com|g' /etc/apk/repositories
-
-RUN apk add --no-cache elasticsearch curl
-
-RUN rm -rf /etc/init.d/elasticsearch \
-    && rm -rf /usr/share/java/elasticsearch/plugins \
-    && mv /usr/share/java/elasticsearch /usr/share/es \
-    && echo "===> Creating Elasticsearch Paths..." \
-    && for path in \
-        /srv/es \
-        /usr/share/es/logs \
-        /usr/share/es/config \
-        /usr/share/es/config/scripts \
-        /usr/share/es/tmp \
-        /usr/share/es/plugins \
-    ; do \
-        mkdir -p "$path"; \
-    done \
-    && cp /etc/elasticsearch/*.* /usr/share/es/config \
-    && chown -R 1090:1090 /usr/share/es \
-    && chown -R 1090:1090 /srv/es;
-
-RUN sed -i 's:#path.data\: /path/to/data:path.data\: /srv/es:' /usr/share/es/config/elasticsearch.yml;
-RUN sed -i 's:#network.host\: _site_:network.host\: 0.0.0.0:' /usr/share/es/config/elasticsearch.yml;
-RUN sed -i "s/-Xms256m/-Xms${MEMORY}m/g" /usr/share/es/config/jvm.options
-RUN sed -i "s/-Xmx256m/-Xmx${MEMORY}m/g" /usr/share/es/config/jvm.options
-
-WORKDIR /usr/share/es
-
-ENV PATH /usr/share/es/bin:$PATH
-ENV ES_TMPDIR /usr/share/es/tmp
+RUN sed -i 's:#network.host\: _site_:network.host\: 0.0.0.0:' /usr/share/elasticsearch/config/elasticsearch.yml && \
+    sed -i '$a path.data: /srv/es' /usr/share/elasticsearch/config/elasticsearch.yml && \
+    sed -i '$a node.name: node-1' /usr/share/elasticsearch/config/elasticsearch.yml && \
+    sed -i '$a cluster.initial_master_nodes: ["node-1"]' /usr/share/elasticsearch/config/elasticsearch.yml && \
+    sed -i "s/-Xms256m/-Xms${MEMORY}m/g" /usr/share/elasticsearch/config/jvm.options && \
+    sed -i "s/-Xmx256m/-Xmx${MEMORY}m/g" /usr/share/elasticsearch/config/jvm.options
+
+RUN mkdir /usr/share/elasticsearch/tmp && \
+    chown -R 1090:1090 /usr/share/elasticsearch
+
+WORKDIR /usr/share/elasticsearch
+
+ENV PATH /usr/share/elasticsearch/bin:$PATH
+ENV ES_TMPDIR /usr/share/elasticsearch/tmp

 VOLUME ["/srv/es"]

@@ -43,4 +26,3 @@ EXPOSE 9200 9300
 USER 1090

 CMD ["elasticsearch"]
-
diff --git a/container/es/build b/container/es/build
index c7d7115..db5145f 100755
--- a/container/es/build
+++ b/container/es/build
@@ -5,8 +5,8 @@

 require_relative '../defconfig.rb'

-docker_skip_rebuild "es643b:alpine311"
+docker_skip_rebuild "es:7.11.1"

 available_memory = get_available_memory

-system "docker build -t es643b:alpine311 --build-arg MEMORY=#{available_memory} --network=host ."
+system "docker build -t es:7.11.1 --build-arg MEMORY=#{available_memory} --network=host ."
diff --git a/container/es/start b/container/es/start
index 67d6531..3aa9525 100755
--- a/container/es/start
+++ b/container/es/start
@@ -15,7 +15,7 @@ cmd=(
     -v /srv/es:/srv/es
     -v /etc/localtime:/etc/localtime:ro
     --name es-server01
-    es643b:alpine311
+    es:7.11.1
 )

 "${cmd[@]}"
diff --git a/container/logging-es/build b/container/logging-es/build
index 3be841a..b50830e 100755
--- a/container/logging-es/build
+++ b/container/logging-es/build
@@ -5,15 +5,15 @@

 require_relative '../defconfig.rb'

-docker_skip_rebuild "logging-es:7.6.2"
+docker_skip_rebuild "logging-es:7.11.1"

 BASE_IMAGE_DICT = {
-    'aarch64' => 'gagara/elasticsearch-oss-arm64:7.6.2',
-    'x86_64' => 'elasticsearch:7.6.2'
+    'aarch64' => 'elasticsearch:7.11.1@sha256:d52cda1e73d1b1915ba2d76ca1e426620c7b5d6942d9d2f432259503974ba786',
+    'x86_64' => 'elasticsearch:7.11.1'
 }.freeze

 BASE_IMAGE = BASE_IMAGE_DICT[%x(arch).chomp]

 available_memory = get_available_memory

-system "docker build -t logging-es:7.6.2 --build-arg BASE_IMAGE=#{BASE_IMAGE} --build-arg MEMORY=#{available_memory} ."
+system "docker build -t logging-es:7.11.1 --build-arg BASE_IMAGE=#{BASE_IMAGE} --build-arg MEMORY=#{available_memory} ."
diff --git a/container/logging-es/start b/container/logging-es/start
index 9be7a5b..05ac46f 100755
--- a/container/logging-es/start
+++ b/container/logging-es/start
@@ -15,7 +15,7 @@ cmd=(
     -p 9302:9300
     -v /srv/es/logging-es:/srv/es/logging-es
     --name logging-es
-    logging-es:7.6.2
+    logging-es:7.11.1
 )

 "${cmd[@]}"
--
2.23.0
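Both build scripts pick the base image from an arch-keyed dict, keying on the output of `arch`. A minimal Ruby sketch of that lookup, with the arch value passed in rather than shelled out and a hypothetical `base_image_for` helper (not part of the actual scripts):

```ruby
# Mirrors the BASE_IMAGE_DICT pattern from container/logging-es/build:
# aarch64 needs a digest-pinned multi-arch image, x86_64 uses the plain tag.
BASE_IMAGE_DICT = {
  'aarch64' => 'elasticsearch:7.11.1@sha256:d52cda1e73d1b1915ba2d76ca1e426620c7b5d6942d9d2f432259503974ba786',
  'x86_64'  => 'elasticsearch:7.11.1'
}.freeze

def base_image_for(arch)
  BASE_IMAGE_DICT[arch] or raise "unsupported arch: #{arch}"
end
```

The real scripts then interpolate the result into `docker build --build-arg BASE_IMAGE=...`.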
[PATCH v2 compass-ci 1/2] container/kibana: upgrade and addition
by Wu Zhende 31 Mar '21

1. Upgrade from 7.6.2 to 7.11.1.
2. Add a new kibana for data: one instance displays logs and one displays data.
3. Delete the logtrail plugin because it is not supported by the new kibana version.

Signed-off-by: Wu Zhende <wuzhende666(a)163.com>
---
 container/kibana-logging/Dockerfile    | 12 ++++++++
 container/kibana-logging/build         | 16 ++++++++++
 container/kibana-logging/start         | 31 +++++++++++++++++++
 container/kibana-logging/start-depends |  1 +
 container/kibana/Dockerfile            | 10 -------
 container/kibana/build                 |  8 ++---
 container/kibana/logtrail.json         | 41 --------------------------
 container/kibana/start                 | 14 ++++-----
 container/kibana/start-depends         |  2 +-
 9 files changed, 72 insertions(+), 63 deletions(-)
 create mode 100644 container/kibana-logging/Dockerfile
 create mode 100755 container/kibana-logging/build
 create mode 100755 container/kibana-logging/start
 create mode 100755 container/kibana-logging/start-depends
 delete mode 100644 container/kibana/logtrail.json

diff --git a/container/kibana-logging/Dockerfile b/container/kibana-logging/Dockerfile
new file mode 100644
index 0000000..970eb5a
--- /dev/null
+++ b/container/kibana-logging/Dockerfile
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+ARG BASE_IMAGE
+
+FROM ${BASE_IMAGE}
+
+# docker image borrowed from hub.docker.com/r/gagara/kibana-oss-arm64
+
+MAINTAINER Wu Zhende <wuzhende666(a)163.com>
+
+RUN sed -i 's/server.host: "0"/server.host: "0.0.0.0"/' config/kibana.yml
diff --git a/container/kibana-logging/build b/container/kibana-logging/build
new file mode 100755
index 0000000..9c75aeb
--- /dev/null
+++ b/container/kibana-logging/build
@@ -0,0 +1,16 @@
+#!/usr/bin/env ruby
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# frozen_string_literal: true
+
+require_relative '../defconfig'
+
+docker_skip_rebuild "kibana:7.11.1"
+
+BASE_IMAGE_DICT = {
+    'aarch64' => 'jamesgarside/kibana:7.11.1',
+    'x86_64' => 'kibana:7.11.1' }.freeze
+
+BASE_IMAGE = BASE_IMAGE_DICT[%x(arch).chomp]
+
+system "docker build -t kibana:7.11.1 --build-arg BASE_IMAGE=#{BASE_IMAGE} ."
diff --git a/container/kibana-logging/start b/container/kibana-logging/start
new file mode 100755
index 0000000..f2169a6
--- /dev/null
+++ b/container/kibana-logging/start
@@ -0,0 +1,31 @@
+#!/usr/bin/env ruby
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# frozen_string_literal: true
+
+require 'set'
+require_relative '../defconfig.rb'
+
+names = Set.new %w[
+    LOGGING_ES_HOST
+    LOGGING_ES_PORT
+]
+
+defaults = relevant_defaults(names)
+LOGGING_ES_HOST = defaults['LOGGING_ES_HOST'] || '172.17.0.1'
+LOGGING_ES_PORT = defaults['LOGGING_ES_PORT'] || '9202'
+
+docker_rm 'kibana-logging'
+
+cmd = %W[
+    docker run
+    --restart=always
+    --name kibana-logging
+    -v /etc/localtime:/etc/localtime:ro
+    -d
+    -e ELASTICSEARCH_HOSTS=http://#{LOGGING_ES_HOST}:#{LOGGING_ES_PORT}
+    -p 20000:5601
+    kibana:7.11.1
+]
+
+system(*cmd)
diff --git a/container/kibana-logging/start-depends b/container/kibana-logging/start-depends
new file mode 100755
index 0000000..66c1996
--- /dev/null
+++ b/container/kibana-logging/start-depends
@@ -0,0 +1 @@
+logging-es
diff --git a/container/kibana/Dockerfile b/container/kibana/Dockerfile
index 60b889d..970eb5a 100644
--- a/container/kibana/Dockerfile
+++ b/container/kibana/Dockerfile
@@ -10,13 +10,3 @@ FROM ${BASE_IMAGE}
 MAINTAINER Wu Zhende <wuzhende666(a)163.com>

 RUN sed -i 's/server.host: "0"/server.host: "0.0.0.0"/' config/kibana.yml
-
-USER root
-RUN yum -y install wget \
-    && wget https://github.com/sivasamyk/logtrail/releases/download/v0.1.31/logtrail-7.… -O /logtrail-7.6.2-0.1.31.zip
-
-USER 1090
-RUN ./bin/kibana-plugin install file:///logtrail-7.6.2-0.1.31.zip \
-    && rm -rf /tmp/logtrail-7.6.2-0.1.31.zip
-
-COPY --chown=1090:1090 logtrail.json /usr/share/kibana/plugins/logtrail/logtrail.json
diff --git a/container/kibana/build b/container/kibana/build
index 9a1fb6d..9c75aeb 100755
--- a/container/kibana/build
+++ b/container/kibana/build
@@ -5,12 +5,12 @@

 require_relative '../defconfig'

-docker_skip_rebuild "kibana:7.6.2"
+docker_skip_rebuild "kibana:7.11.1"

 BASE_IMAGE_DICT = {
-    'aarch64' => 'gagara/kibana-oss-arm64:7.6.2',
-    'x86_64' => 'kibana:7.6.2' }.freeze
+    'aarch64' => 'jamesgarside/kibana:7.11.1',
+    'x86_64' => 'kibana:7.11.1' }.freeze

 BASE_IMAGE = BASE_IMAGE_DICT[%x(arch).chomp]

-system "docker build -t kibana:7.6.2 --build-arg BASE_IMAGE=#{BASE_IMAGE} ."
+system "docker build -t kibana:7.11.1 --build-arg BASE_IMAGE=#{BASE_IMAGE} ."
diff --git a/container/kibana/logtrail.json b/container/kibana/logtrail.json
deleted file mode 100644
index c63f2ac..0000000
--- a/container/kibana/logtrail.json
+++ /dev/null
@@ -1,41 +0,0 @@
-{
-    "version": 2,
-    "index_patterns": [{
-        "es": {
-            "default_index": "*",
-            "allow_url_parameter": false,
-            "timezone": "CST"
-        },
-        "tail_interval_in_seconds": 10,
-        "es_index_time_offset_in_seconds": 0,
-        "display_timezone": "CST",
-        "display_timestamp_format": "YYYY MMM DD HH:mm:ss",
-        "max_buckets": 500,
-        "nested_objects": false,
-        "default_time_range_in_days": 30,
-        "max_hosts": 100,
-        "max_events_to_keep_in_viewer": 10000,
-        "default_search": "",
-        "fields": {
-            "mapping": {
-                "timestamp": "time",
-                "display_timestamp" : "time",
-                "hostname": "container_name",
-                "program": "tags",
-                "message": "log"
-            },
-            "message_format": "{{{log}}}",
-            "keyword_suffix": "keyword"
-        },
-        "color_mapping": {
-            "field": "level",
-            "mapping": {
-                "ERROR": "#FF0000",
-                "WARN": "#FFEF96",
-                "DEBUG": "#B5E7A0",
-                "TRACE": "#CFE0E8",
-                "INFO": "#339999"
-            }
-        }
-    }]
-}
diff --git a/container/kibana/start b/container/kibana/start
index 26a4944..63cb8f8 100755
--- a/container/kibana/start
+++ b/container/kibana/start
@@ -7,13 +7,13 @@ require 'set'
 require_relative '../defconfig.rb'

 names = Set.new %w[
-    LOGGING_ES_HOST
-    LOGGING_ES_PORT
+    ES_HOST
+    ES_PORT
 ]

 defaults = relevant_defaults(names)
-LOGGING_ES_HOST = defaults['LOGGING_ES_HOST'] || '172.17.0.1'
-LOGGING_ES_PORT = defaults['LOGGING_ES_PORT'] || '9202'
+ES_HOST = defaults['ES_HOST'] || '172.17.0.1'
+ES_PORT = defaults['ES_PORT'] || '9200'

 docker_rm 'kibana'

@@ -23,9 +23,9 @@ cmd = %W[
     --name kibana
     -v /etc/localtime:/etc/localtime:ro
     -d
-    -e ELASTICSEARCH_HOSTS=http://#{LOGGING_ES_HOST}:#{LOGGING_ES_PORT}
-    -p 20000:5601
-    kibana:7.6.2
+    -e ELASTICSEARCH_HOSTS=http://#{ES_HOST}:#{ES_PORT}
+    -p 20017:5601
+    kibana:7.11.1
 ]

 system(*cmd)
diff --git a/container/kibana/start-depends b/container/kibana/start-depends
index 66c1996..8357fca 100755
--- a/container/kibana/start-depends
+++ b/container/kibana/start-depends
@@ -1 +1 @@
-logging-es
+es
--
2.23.0
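The start scripts read ES_HOST/ES_PORT from site defaults and fall back to hardcoded values when unset. A minimal sketch of that pattern, with a plain hash standing in for compass-ci's `relevant_defaults` helper and a hypothetical `es_endpoint` function:

```ruby
# Defaults-with-fallback pattern from container/kibana/start:
# take the configured value when present, else the documented default
# (172.17.0.1:9200 for the data ES after this patch).
def es_endpoint(defaults)
  host = defaults['ES_HOST'] || '172.17.0.1'
  port = defaults['ES_PORT'] || '9200'
  "http://#{host}:#{port}"
end
```

The resulting URL is what gets passed to the container as ELASTICSEARCH_HOSTS.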
[PATCH compass-ci] lib/git_mirror.rb: fix no positive? method error
by Li Yuanchao 31 Mar '21

The result format of an es query changed: result['hits']['total'] changed from a number to a hash, so the positive? method can no longer be used. Use the count method provided by es_client instead.

Signed-off-by: Li Yuanchao <lyc163mail(a)163.com>
---
 lib/git_mirror.rb | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/git_mirror.rb b/lib/git_mirror.rb
index c967c8b..7f36be3 100644
--- a/lib/git_mirror.rb
+++ b/lib/git_mirror.rb
@@ -414,9 +414,9 @@ class MirrorMain
       new_refs_count: {}
     }
     query = { query: { match: { _id: git_repo } } }
-    result = @es_client.search(index: 'repo', body: query)['hits']
-    return fork_stat unless result['total'].positive?
+    return fork_stat unless @es_client.count(index: 'repo', body: query)['count'].positive?

+    result = @es_client.search(index: 'repo', body: query)['hits']
     fork_stat.each_key do |key|
       fork_stat[key] = result['hits'][0]['_source'][key.to_s] || fork_stat[key]
     end
--
2.23.0
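The breaking change behind this fix: since Elasticsearch 7.x, hits.total in a search response is an object rather than a bare number. A small Ruby illustration (the response hashes below are hand-written samples, not captured ES output):

```ruby
# Pre-7.x style: hits.total was a plain integer, so Integer#positive? worked.
old_hits = { 'total' => 1 }

# 7.x style: hits.total is a hash, which has no #positive? method.
new_hits = { 'total' => { 'value' => 1, 'relation' => 'eq' } }

old_hits['total'].respond_to?(:positive?)   # Integer => true
new_hits['total'].respond_to?(:positive?)   # Hash    => false

# The fix switches to the count API, whose response keeps a plain number:
count_response = { 'count' => 1 }
count_response['count'].positive?           # => true
```

Calling `.positive?` on the 7.x hash would raise NoMethodError, which is exactly the failure the patch title describes.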
[PATCH lkp-tests] stat: optimize to obtain error for build DockerFile
by Liu Shaofei 31 Mar '21

Signed-off-by: Liu Shaofei <370072077(a)qq.com>
---
 stats/openeuler_docker.rb | 102 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 102 insertions(+)
 create mode 100755 stats/openeuler_docker.rb

diff --git a/stats/openeuler_docker.rb b/stats/openeuler_docker.rb
new file mode 100755
index 000000000..c4a2cd691
--- /dev/null
+++ b/stats/openeuler_docker.rb
@@ -0,0 +1,102 @@
+#!/usr/bin/env ruby
+
+def pre_handel(result, file_path)
+  status = false
+  repo_set = Set[]
+  sys_set = Set[]
+
+  File.readlines(file_path).each do |line|
+    case line.chomp!
+    # Error: Unable to find a match: docker-registry mock xx
+    when /Error: Unable to find a match: (.+)/
+      $1.split.each do |repo|
+        repo_set << repo
+      end
+
+    # RUN yum -y swap -- remove fakesystemd -- install systemd systemd-libs
+    # yum swap: error: unrecognized arguments: install systemd systemd-libs
+    when /yum swap: error: .*: install (.+)/
+      $1.split.each do |sys|
+        sys_set << sys
+      end
+
+    # curl: (22) The requested URL returned error: 404 Not Found
+    # error: skipping https://dl.fedoraproject.org/pub/epel/bash-latest-7.noarch.rpm - transfer failed
+    when /.*error: .* (https.*)/
+      result['requested-URL-returned.error'] = [1]
+      result['requested-URL-returned.error.message'] = [line.to_s]
+      status = true
+
+    # Error: Unknown repo: 'powertools'
+    when /Error: Unknown repo: (.+)/
+      repo = $1.delete!("'")
+      result["unknown-repo.#{repo}"] = [1]
+      result["unknown-repo.#{repo}.message"] = [line.to_s]
+      status = true
+
+    # Error: Module or Group 'convert' does not exist.
+    when /Error: Module or Group ('[^\s]+')/
+      repo = $1.delete!("'")
+      result["error.not-exist-module-or-group.#{repo}"] = [1]
+      result["error.not-exist-module-or-group.#{repo}.message"] = [line.to_s]
+      status = true
+
+    # /bin/sh: passwd: command not found
+    when /\/bin\/sh: (.+): command not found/
+      result["sh.command-not-found.#{$1}"] = [1]
+      result["sh.command-not-found.#{$1}.message"] = [line.to_s]
+      status = true
+    end
+
+    repo_set.each do |repo|
+      result["yum.error.Unable-to-find-a-match.#{repo}"] = [1]
+      result["yum.error.Unable-to-find-a-match.#{repo}.message"] = ["Error: Unable to find a match #{repo}"]
+      status = true
+    end
+
+    sys_set.each do |sys|
+      result["yum.swap.error.unrecognized-arguments-install.#{sys}"] = [1]
+      result["yum.swap.error.unrecognized-arguments.#{sys}.message"] =
+        ["yum swap: error: unrecognized arguments install #{sys}"]
+      status = true
+    end
+  end
+  status
+end
+
+def handle_unknown_error(_result, file_path)
+  line_num = %x(cat #{file_path} | grep -n 'Step ' | tail -1 | awk -F: '{print $1}')
+
+  index = 1
+  message = ''
+  File.readlines(file_path).each do |line|
+    if index == Integer(line_num)
+      message += line
+    else
+      index += 1
+      next
+    end
+  end
+
+  message = $1 if message =~ %r(\u001b\[91m(.+))
+  message
+end
+
+def openeuler_docker(log_lines)
+  result = Hash.new { |hash, key| hash[key] = [] }
+
+  log_lines.each do |line|
+    next unless line =~ %r(([^\s]+).(build|run)\.fail)
+
+    key, value = line.split(':')
+    key.chomp!
+    result[key] << value.to_i
+
+    file_path = "#{RESULT_ROOT}/#{$1}" # $1 named by docker-image name
+    next unless File.exist?(file_path)
+    next if pre_handel(result, file_path)
+
+    result["#{key}.message"] << handle_unknown_error(result, file_path)
+  end
+
+  result
+end
--
2.23.0
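A reduced sketch of how one Docker build log line becomes a stats key, following the same when-regex pattern as the new script (`classify` is a hypothetical helper for illustration; the real script reads the whole log file from $RESULT_ROOT and tracks many more cases):

```ruby
# Each recognized error line yields a pair of keys: a counter key and a
# matching ".message" key holding the raw line, as in stats/openeuler_docker.rb.
def classify(line)
  result = {}
  case line
  when %r{/bin/sh: (.+): command not found}
    result["sh.command-not-found.#{$1}"] = [1]
    result["sh.command-not-found.#{$1}.message"] = [line]
  when /Error: Unknown repo: '(.+)'/
    result["unknown-repo.#{$1}"] = [1]
    result["unknown-repo.#{$1}.message"] = [line]
  end
  result
end
```

A line like `/bin/sh: passwd: command not found` thus produces the counter `sh.command-not-found.passwd: 1`, which downstream tooling can aggregate per docker image.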
[PATCH lkp-tests] stats/ansible_test: optimize the output of error_id
by Wang Chenglong 31 Mar '21

[Why]
Optimize the output of error_id.
Standard format: @module_name:error_id

before:
  error.[message].Could-not-find-or-access-RedHat-20-yml-Searched-in-root-ansible-roles-ansible-role-postgresql-vars-RedHat-20-yml-root-ansible-roles-ansible-role-postgresql-RedHat-20-yml-root-ansible-roles-ansible-role-postgresql-tasks-vars-RedHat-20-yml-root-ansible-roles-ansible-role-postgresql-tasks-RedHat-20-yml-root-ansible-vars-RedHat-20-yml-root-ansible-RedHat-20-yml-on-the-Ansible-Controller-If-you-are-using-a-module-and-expect-the-file-to-exist-on-the-remote-see-the-remote_src-option: 1

after:
  @inculde_vars:Could-not-find-or-access-RedHat-20-yml: 1

Signed-off-by: Wang Chenglong <18509160991(a)163.com>
---
 stats/ansible_test | 99 ++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 83 insertions(+), 16 deletions(-)

diff --git a/stats/ansible_test b/stats/ansible_test
index 3d3b0b0cb..cadc11786 100755
--- a/stats/ansible_test
+++ b/stats/ansible_test
@@ -1,40 +1,107 @@
 #!/usr/bin/env ruby

 require 'json'
+require 'yaml'

-def output_error(error_msg, error_key)
-  return if error_msg.nil?
-
+def output_error(error_msg)
   if error_msg.is_a? Array
     error_msg.each do |i|
       error_id = common_error_id i
-      puts "error.[#{error_key}].#{error_id}: 1"
-      puts "error.#{error_id}.message: #{@ansible_failed_info}"
+      puts "@#{@module_name['action']}:#{error_id}: 1"
+      puts "@#{@module_name['action']}:#{error_id}.message: #{@ansible_failed_info}"
     end
   elsif error_msg.is_a? String
     error_id = common_error_id error_msg
-    puts "error.[#{error_key}].#{error_id}: 1"
-    puts "error.#{error_id}.message: #{@ansible_failed_info}"
+    puts "@#{@module_name['action']}:#{error_id}: 1"
+    puts "@#{@module_name['action']}:#{error_id}.message: #{@ansible_failed_info}"
   end
 end

 def common_error_id(line)
-  line.gsub!(/[^\w]/, '-')
+  line.gsub!(/[^\w]/, '--')
+  line.gsub!(/[0-9]{3,}/, '#')
   line.gsub!(/-+/, '-') # Replace multiple consecutive '-' with a single one
   line.gsub!(/^-|-$/, '')
   line
 end

+def parse_msg(ansible_failed_json)
+  case ansible_failed_json['msg']
+  when /package (.*64)/
+    output_error "cannot install the #{$1} for the job"
+  when /(The error appears to be in .*')/
+    output_error $1
+  when /(Failed to import the required Python library .*python3\.)/
+    output_error $1
+  # when /(An unhandled exception occurred while running the lookup plugin .*'\.)/
+  when /original message: (.*)/
+    output_error $1
+  when /(Unable to start service (\w{1,}):)/
+    output_error $1
+  when /(Failed to download metadata for repo .*'):/
+    output_error $1
+  when /(Failed to download packages: (.*):)/
+    output_error $2
+  when /(Failed to find required .* in paths):/
+    output_error $1
+  when /(Failure downloading .*),/
+    output_error $1
+  when /(Could not find or access.*)'/
+    output_error $1
+  when /(Unsupported parameters) for/
+    output_error $1
+  when /aajdhch/
+    output_error $1
+  when /non-zero return code|Failed to install some of the specified packages/
+    return
+  else
+    output_error ansible_failed_json['msg']
+  end
+end
+
+def parse_stderr(ansible_failed_json)
+  case ansible_failed_json['stderr']
+  when /^$/
+    return
+  else
+    ansible_failed_json['stderr_lines'].each do |i|
+      output_error i
+    end
+  end
+end
+
+def parse_message(ansible_failed_json)
+  case ansible_failed_json['message']
+  when /(Could not find or access.*)'/
+    output_error $1
+  else
+    output_error ansible_failed_json['message']
+  end
+end
+
+def parse_failures(ansible_failed_json)
+  ansible_failed_json['failures'].each do |i|
+    case i
+    when /(.*)/
+      output_error $1
+    end
+  end
+end
+
 while (line = STDIN.gets)
-  next unless line =~ /(FAILED!|failed:).*=>(.*)/
+  case line
+  when /({'action'.*})/
+    @module_name = YAML.load($1)
+  when /(FAILED!|failed:).*=>(.*)/

-  @ansible_failed_info = $2
-  next if @ansible_failed_info.empty?
+    @ansible_failed_info = $2
+    next if @ansible_failed_info.empty?

-  ansible_failed_json = JSON.parse @ansible_failed_info
+    ansible_failed_json = JSON.parse @ansible_failed_info

-  output_error ansible_failed_json['msg'],'msg'
-  output_error ansible_failed_json['message'],'message'
-  output_error ansible_failed_json['cmd'],'cmd'
-  output_error ansible_failed_json['failures'],'failures'
+    parse_msg ansible_failed_json unless ansible_failed_json['msg'].nil?
+    parse_stderr ansible_failed_json unless ansible_failed_json['stderr'].nil?
+    parse_message ansible_failed_json unless ansible_failed_json['message'].nil?
+    parse_failures ansible_failed_json unless ansible_failed_json['failures'].nil?
+  end
 end
--
2.23.0
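The tightened normalization can be exercised standalone. The sketch below reproduces the patched common_error_id logic on a copy of the input (the original mutates its argument in place with gsub!):

```ruby
# Normalization from the patched stats/ansible_test: non-word characters
# collapse to '-', runs of 3+ digits become '#', and leading/trailing '-'
# are trimmed, yielding short stable error ids.
def common_error_id(line)
  line = line.dup
  line.gsub!(/[^\w]/, '--')
  line.gsub!(/[0-9]{3,}/, '#')
  line.gsub!(/-+/, '-')
  line.gsub!(/^-|-$/, '')
  line
end
```

For example, the message `Could not find or access 'RedHat-20.yml'` normalizes to `Could-not-find-or-access-RedHat-20-yml`, matching the "after" output in the commit message.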
[PATCH compass-ci 3/4] doc/job/os_mount.md: add the check pv_device step into document
by Xu Xijian 31 Mar '21

Signed-off-by: Xu Xijian <hdxuxijian(a)163.com>
---
 doc/job/os_mount.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/doc/job/os_mount.md b/doc/job/os_mount.md
index eab95ed..b9b8308 100644
--- a/doc/job/os_mount.md
+++ b/doc/job/os_mount.md
@@ -47,6 +47,7 @@ The brief flow is as follows:
     ${boot_lv} -- boot logical volume:
       - will boot from this lv.
       - boot_lv=/dev/mapper/os-${os}_${os_arch}_${os_version}
+  - check whether ${pv_device} is given, if so, create physical volume(pv for short) and volume group(vg for short) on ${pv_device}.
   - if ${src_lv} not exists: create it, and rsync the rootfs from cluster nfs server.
   - if ${boot_lv} exists: delete it.
   - create ${boot_lv} as the snapshot of ${src_lv}.
@@ -57,6 +58,7 @@ The brief flow is as follows:
 ## persistent rootfs data

 When you need to persist the rootfs data of a job, and use it in the subsequent job(s), two fields in `kernel_custom_params` will help you: 'src_lv_suffix', 'boot_lv_suffix'.
+If you want to run on a brand new machine, you should use 'pv_device' to assign a disk device to create pv and vg.

 The brief flow is as follows:

@@ -65,6 +67,7 @@ The brief flow is as follows:
 2. initrd stage:
     use_root_partition seems like /dev/mapper/os-${os}_${os_arch}_${os_version}_${src_lv_suffix}
     save_root_partition seems like /dev/mapper/os-${os}_${os_arch}_${os_version}_${boot_lv_suffix}
+    check and ensure pv and vg exist for lv.
   - firstly, we need two logical volume:
     ${src_lv} -- src logical volume:
       - if have ${use_root_partition}, src_lv=${use_root_partition}
@@ -95,6 +98,8 @@ Demo usage:
     data of job-20210218.yaml. Then you need add the follow field in your job-20210219.yaml:
       kernel_custom_params: src_lv_suffix=zhangsan_local_for_iperf_20210218
+  - if you want to create pv and vg on /dev/sda, you can use like this:
+      kernel_custom_params: pv_device=/dev/sda
 ...
 ```

 Notes:
@@ -148,6 +153,9 @@ Notes:
     lvdisplay ${src_lv} > /dev/null && return

+    # need create volume group, usually in first use of this machine. $pv_device e.g. /dev/sda
+    [ -n "$pv_device" ] && { do some check and create pv and vg }
+
     # create logical volume
     lvcreate --size 10G --name $(basename ${src_lv}) os || exit
--
2.23.0
[PATCH compass-ci 1/4] src/lib/job.cr: add initrd param pv_device to scheduler
by Xu Xijian 31 Mar '21

Signed-off-by: Xu Xijian <hdxuxijian(a)163.com>
---
 src/lib/job.cr | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/lib/job.cr b/src/lib/job.cr
index 5767f61..3ba991d 100644
--- a/src/lib/job.cr
+++ b/src/lib/job.cr
@@ -90,6 +90,7 @@ class Job
     linux_vmlinuz_path
     src_lv_suffix
     boot_lv_suffix
+    pv_device
   )

 macro method_missing(call)
--
2.23.0
