Compass-ci

compass-ci@openeuler.org

5230 discussions
[PATCH compass-ci] scheduler/find_job_boot.cr: update tbox state when testbox requesting
by Wei Jihui 03 Mar '21

The tbox state stays "running" when a job is interrupted, so the tbox will keep rebooting even when no new job arrives. Update the tbox state every time it requests a job.

Signed-off-by: Wei Jihui <weijihuiall@163.com>
---
 src/lib/sched.cr | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index bec90cd..51f493e 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -159,6 +159,7 @@ class Sched
       "time" => get_time,
       "deadline" => deadline
     }
+    @redis.update_wtmp(testbox.to_s, hash)
     @es.update_tbox(testbox.to_s, hash)
   end
 
-- 
2.23.0
[PATCH compass-ci] qemu/kvm.sh: use "command -v" to check before assigning value to qemu instead of assigning directly
by Lin Jiaxin 03 Mar '21

The original logic assigned a value to qemu first and checked afterwards. This could leave qemu set to a nonexistent command rather than the empty string "".

[log]
[INFO] 2021-03-03 15:05:09 qemu.sh: less /srv/cci/serial/logs/vm-2p16g.taishan200-2280-2s64p-256g--a10-0
/c/compass-ci/providers/qemu/kvm.sh: line 209: qemu-kvm: command not found

Signed-off-by: Lin Jiaxin <ljx.joe@qq.com>
---
 providers/qemu/kvm.sh | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/providers/qemu/kvm.sh b/providers/qemu/kvm.sh
index fdaf254..dcab81a 100755
--- a/providers/qemu/kvm.sh
+++ b/providers/qemu/kvm.sh
@@ -71,7 +71,9 @@ check_kernel()
 check_qemu()
 {
 	# debian has both qemu-system-x86_64 and qemu-system-riscv64 command
-	[[ $kernel =~ 'riscv64' ]] && qemu=qemu-system-riscv64
+	[[ $kernel =~ 'riscv64' ]] && {
+		command -v qemu-system-riscv64 > /dev/null && qemu=qemu-system-riscv64
+	}
 }
 
 check_initrds()
@@ -139,9 +141,12 @@ set_qemu()
 		qemu-kvm
 	)
 
-	for qemu in "${qemus[@]}"
+	for qemu_candidate in "${qemus[@]}"
 	do
-		command -v "$qemu" > /dev/null && break
+		command -v "$qemu_candidate" > /dev/null && {
+			qemu="$qemu_candidate"
+			break
+		}
 	done
 
 	check_qemu
-- 
2.23.0
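The fix applies the same pattern in both places: probe with `command -v` first and assign only on success, so an absent binary leaves the variable empty instead of naming a command that cannot run. A minimal standalone sketch of the pattern (the candidate list here is illustrative, not the exact one from kvm.sh):

    #!/bin/bash
    # pick the first qemu binary that exists in PATH; leave $qemu empty otherwise
    qemu=
    for candidate in qemu-system-aarch64 qemu-kvm
    do
        command -v "$candidate" > /dev/null && {
            qemu="$candidate"
            break
        }
    done
    [ -n "$qemu" ] || echo "no usable qemu binary found" >&2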
[PATCH compass-ci] doc: correct the field which allow user add custom kernel params
by Yu Chuan 03 Mar '21

[Why]
After searching the code, 'kernel_custom_params' is the field that lets users add custom kernel parameters in job.yaml, not 'kernel_append_root'.

[Reference]
- src/scheduler/kernel_params.cr
```
private def kernel_custom_params
  return @hash["kernel_custom_params"] if @hash["kernel_custom_params"]?
end

private def set_kernel_params
  kernel_params_values = "#{kernel_common_params()} #{kernel_custom_params()} #{self.kernel_append_root} #{kernel_console()}"
  kernel_params_values = kernel_params_values.split(" ").map(&.strip()).reject!(&.empty?)
  @hash["kernel_params"] = JSON.parse(kernel_params_values.to_json)
  @hash["ipxe_kernel_params"] = JSON.parse(initrds_basename.to_json)
end
```

Signed-off-by: Yu Chuan <13186087857@163.com>
---
 doc/job/os_mount.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/job/os_mount.md b/doc/job/os_mount.md
index f4b28bb4b4c7..0e57d9ea36db 100644
--- a/doc/job/os_mount.md
+++ b/doc/job/os_mount.md
@@ -56,7 +56,7 @@ The brief flow is as follows:
 
 ## persistent rootfs data
 
-When you need to persist the rootfs data of a job, and use it in the subsequent job(s), two fields in `kernel_append_root` will help you: `save_root_partition`, `use_root_partition`.
+When you need to persist the rootfs data of a job, and use it in the subsequent job(s), two fields in `kernel_custom_params` will help you: `save_root_partition`, `use_root_partition`.
 
 The brief flow is as follows:
 
@@ -87,12 +87,12 @@ Demo usage:
   rootfs data of job-20210218.yaml so that it can be used by the subsequent jobs.
   Then you need add the follow field in your job-20210218.yaml:
 
-      kernel_append_root: save_root_partition=zhangsan_local_for_iperf_20210218
+      kernel_custom_params: save_root_partition=zhangsan_local_for_iperf_20210218
 
 - in 20210219, you submit a job-20210219.yaml, and you want to use the rootfs
   data of job-20210218.yaml.
   Then you need add the follow field in your job-20210219.yaml:
 
-      kernel_append_root: use_root_partition=zhangsan_local_for_iperf_20210218
+      kernel_custom_params: use_root_partition=zhangsan_local_for_iperf_20210218
 ```
 
 Notes:
@@ -109,7 +109,7 @@ Notes:
      os_arch: aarch64
      os_version: 20.03
      os_mount: local
-     kernel_append_root: use_root_partition=zhangsan_local_for_iperf_20210218 save_root_partition=zhangsan_local_for_iperf_20210219
+     kernel_custom_params: use_root_partition=zhangsan_local_for_iperf_20210218 save_root_partition=zhangsan_local_for_iperf_20210219
 ```
 
 2. scheduler return the ipxe_response to testbox
 
@@ -118,7 +118,7 @@ Notes:
    #!ipxe
    dhcp
    initrd http://${http_server_ip}:${http_server_port}/os/openeuler/aarch64/20.03-iso-snapshots/${timestamp}/initrd.lkp
-   kernel http://${http_server_ip}:${http_server_port}/os/openeuler/aarch64/20.03-iso-snapshots/${timestamp}/boot/vmlinuz root=/dev/mapper/os-openeuler_aarch64_20.03 rootfs_src=${nfs_server_ip}:os/openeuler/aarch64/20.03-iso-snapshots/${timestamp} initrd=initrd.lkp ${kernel_append_root}
+   kernel http://${http_server_ip}:${http_server_port}/os/openeuler/aarch64/20.03-iso-snapshots/${timestamp}/boot/vmlinuz root=/dev/mapper/os-openeuler_aarch64_20.03 rootfs_src=${nfs_server_ip}:os/openeuler/aarch64/20.03-iso-snapshots/${timestamp} initrd=initrd.lkp ${kernel_custom_params}
    boot
 ```
-- 
2.23.0
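To see the corrected field end to end, a job that saves its rootfs partition for reuse could be written and submitted like this; the file name, suite, and partition label are illustrative, not taken from the patch:

    # sketch: write a job yaml that persists rootfs data, then submit it
    cat > job-day1.yaml <<'EOF'
    suite: iperf
    os: openeuler
    os_arch: aarch64
    os_version: 20.03
    os_mount: local
    kernel_custom_params: save_root_partition=alice_local_for_iperf_day1
    EOF
    submit job-day1.yaml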
[PATCH compass-ci 2/2] lib/dump-stat.rb: refactor lkp-tests/sbin/dump-stat
by Lu Weitao 03 Mar '21

[Why]
Support handling the hash returned by lkp-tests/stats/$script.rb.

Signed-off-by: Lu Weitao <luweitaobe@163.com>
---
 lib/dump_stat.rb | 194 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 194 insertions(+)
 create mode 100644 lib/dump_stat.rb

diff --git a/lib/dump_stat.rb b/lib/dump_stat.rb
new file mode 100644
index 0000000..86b2bd7
--- /dev/null
+++ b/lib/dump_stat.rb
@@ -0,0 +1,194 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+LKP_SRC ||= ENV['LKP_SRC'] || '/c/lkp-tests'
+
+require "#{LKP_SRC}/lib/statistics"
+require "#{LKP_SRC}/lib/bounds"
+require "#{LKP_SRC}/lib/yaml"
+require "#{LKP_SRC}/lib/job"
+require "#{LKP_SRC}/lib/string_ext"
+require "#{LKP_SRC}/lib/log"
+require 'set'
+
+UNSTRUCTURED_MONITORS = %w[ftrace].to_set
+
+def warn_stat(msg, monitor)
+  log_warn msg
+  log_warn "check #{RESULT_ROOT}/#{monitor}"
+end
+
+# dump stat which input by lkp-tests/stats/$script return
+# input:
+# eg-1:
+#   {
+#     "pgfree" => [275506, 280018],
+#     ...
+#   }
+# eg-2:
+#   {
+#     "iperf.tcp.sender.bps" => 34804801216.197174,
+#     "iperf.tcp.receiver.bps" => "34804762215.18231"
+#   }
+module DumpStat
+  def self.dump_stat(monitor, stat_result)
+    @result = {}
+    @invalid_records = []
+    @record_index = 0
+    @monitor = monitor
+
+    stat_result.each do |key, value|
+      key = key.resolve_invalid_bytes
+      next if key[0] == '#'
+      next if value.empty? || value == 0
+      next if monitor =~ /^(dmesg|kmsg)$/ && key =~ /^(message|pattern):/
+
+      if key =~ /[ \t]/
+        @invalid_records.push record_index
+        warn_stat "whitespace in stats name: #{key}", @monitor
+        return nil # for exit current stats/script dump-stat
+      end
+      next if assign_log_message(key, value)
+
+      k = @monitor + '.' + key
+      @result[k] ||= []
+      fill_zero(k)
+      if value.is_a?(String)
+        value = check_string_value(k, value, @monitor)
+        next unless value
+        return nil unless number?(value, @invalid_records)
+
+        value = value.index('.') ? value.to_f : value.to_i
+      elsif value.is_a?(Array)
+        (0..value.size - 1).each do |i|
+          next unless value[i].is_a?(String)
+
+          value[i] = check_string_value(k, value[i], @monitor)
+          next unless value[i]
+          return nil unless number?(value[i], @invalid_records)
+
+          value[i] = value[i].index('.') ? value[i].to_f : value[i].to_i
+          valid_stats_verification(k, value[i])
+        end
+        @result[k] = value
+        next
+      end
+      valid_stats_verification(k, value)
+      @result[k].push value
+    end
+    return nil if @result.empty?
+
+    remove_zero_stats
+    delete_invalid_number(@result, @invalid_records, @monitor)
+    cols_verifation
+    return nil unless useful_result?(@result)
+
+    save_json(@result, "#{RESULT_ROOT}/#{@monitor}.json", @result.size * @min_cols > 1000)
+  end
+
+  # keep message | log line which key end with .message|.log
+  def self.assign_log_message(key, value)
+    if key.end_with?('.message', '.log')
+      k = @monitor + '.' + key
+      @result[k] = value
+      return true
+    end
+
+    false
+  end
+
+  def self.fill_zero(key)
+    size = @result[key].size
+    if @record_index < size
+      @record_index = size
+    elsif (@record_index - size).positive?
+      # fill 0 for missing values
+      @result[key].concat([0] * (@record_index - size))
+    end
+  end
+
+  def self.valid_stats_verification(key, value)
+    return nil if valid_stats_range? key, value
+
+    @invalid_records.push @record_index
+    puts "outside valid range: #{value} in #{key} #{RESULT_ROOT}"
+  end
+
+  def self.remove_zero_stats
+    @max_cols = 0
+    @min_cols = Float::INFINITY
+    @min_cols_stat = ''
+    @max_cols_stat = ''
+    zero_stats = []
+    @result.each do |key, val|
+      next if key.end_with?('.message', '.log')
+
+      if @max_cols < val.size
+        @max_cols = val.size
+        @max_cols_stat = key
+      end
+      if @min_cols > val.size
+        @min_cols = val.size
+        @min_cols_stat = key
+      end
+      next if val[0] != 0
+      next if val[-1] != 0
+      next if val.sum != 0
+
+      zero_stats << key
+    end
+    zero_stats.each { |x| @result.delete x }
+  end
+
+  def self.cols_verifation
+    return nil unless @min_cols < @max_cols && !UNSTRUCTURED_MONITORS.include?(monitor)
+
+    if @min_cols == @max_cols - 1
+      @result.each { |_k, y| y.pop if y.size == @max_cols }
+      puts "Last record seems incomplete. Truncated #{RESULT_ROOT}/#{@monitor}.json"
+    else
+      warn_stat "Not a matrix: value size is different - #{@min_cols_stat}: #{@min_cols} != #{@max_cols_stat}: #{@max_cols}: #{RESULT_ROOT}/#{@monitor}.json", @monitor
+    end
+  end
+end
+
+def check_string_value(key, value, monitor)
+  # value terminator is expected. If not, throw out an error warning.
+  warn_stat "no line terminator in stats value: #{value}", monitor if value.chomp!.nil?
+
+  value.strip!
+  if value.empty?
+    warn_stat "empty stat value of #{key}", monitor
+    return nil
+  end
+
+  return value
+end
+
+# only number is valid
+def number?(value, invalid_records)
+  unless value.numeric?
+    invalid_records.push record_index
+    warn_stat "invalid stats value: #{value}", monitor
+    return nil
+  end
+
+  true
+end
+
+def delete_invalid_number(result, invalid_records, monitor)
+  return nil if monitor == 'ftrace'
+
+  invalid_records.reverse_each do |index|
+    result.each do |_k, value|
+      value.delete_at index
+    end
+  end
+end
+
+def useful_result?(result)
+  return nil if result.empty?
+  return nil if result.values[0].size.zero?
+  return nil if result.values[-1].size.zero?
+
+  true
+end
-- 
2.23.0
[PATCH compass-ci] service/lifecycle: add lifecycle
by Wu Zhende 03 Mar '21

Function:
- processing timeout jobs/machines
- processing crash jobs/machines

Signed-off-by: Wu Zhende <wuzhende666@163.com>
---
 src/lib/lifecycle.cr | 310 +++++++++++++++++++++++++++++++++++++++++--
 src/lifecycle.cr     |  17 ++-
 2 files changed, 311 insertions(+), 16 deletions(-)

diff --git a/src/lib/lifecycle.cr b/src/lib/lifecycle.cr
index af5cd07..8c52f11 100644
--- a/src/lib/lifecycle.cr
+++ b/src/lib/lifecycle.cr
@@ -1,19 +1,40 @@
 # SPDX-License-Identifier: MulanPSL-2.0+
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
 
+require "set"
 require "kemal"
 require "yaml"
-require "./web_env"
+require "./mq"
+require "./scheduler_api"
 require "../scheduler/elasticsearch_client"
+require "../lifecycle/constants"
+
+class String
+  def bigger_than?(time)
+    return false if self.empty?
+
+    time = time.to_s
+    return true if time.empty?
+
+    time = Time.parse(time, "%Y-%m-%dT%H:%M:%S", Time.local.location)
+    self_time = Time.parse(self, "%Y-%m-%dT%H:%M:%S", Time.local.location)
+
+    self_time > time
+  end
+end
 
 class Lifecycle
   property es
 
-  def initialize(env : HTTP::Server::Context)
+  def initialize
+    @mq = MQClient.instance
     @es = Elasticsearch::Client.new
-    @env = env
-    @log = env.log.as(JSONLogger)
+    @scheduler_api = SchedulerAPI.new
+    @log = JSONLogger.new
+    @jobs = Hash(String, JSON::Any).new
+    @machines = Hash(String, JSON::Any).new
+    @match = Hash(String, Set(String)).new {|h, k| h[k] = Set(String).new}
   end
 
   def alive(version)
@@ -22,18 +43,287 @@ class Lifecycle
     @log.warn(e)
   end
 
-  def get_running_testbox
-    size = @env.params.query["size"]? || 20
-    from = @env.params.query["from"]? || 0
+  def init_from_es
+    jobs = get_active_jobs
+    jobs.each do |result|
+      job_id = result["_id"].to_s
+      job = result["_source"].as_h
+      job.delete_if{|key, _| !JOB_KEYWORDS.includes?(key)}
+
+      @jobs[job_id] = JSON.parse(job.to_json)
+      @match[job["testbox"].to_s] << job_id
+    end
+
+    machines = get_active_machines
+    machines.each do |result|
+      testbox = result["_id"].to_s
+      machine = result["_source"].as_h
+      machine.delete("history")
+
+      machine = JSON.parse(machine.to_json)
+      @machines[testbox] = machine
+
+      deal_match_job(testbox, machine["job_id"].to_s)
+    end
+  end
+
+  def deal_match_job(testbox, job_id)
+    @match[testbox].each do |id|
+      next if id == job_id
+
+      msg = {
+        "job_id" => id,
+        "job_state" => "occupied",
+        "testbox" => testbox
+      }
+      @mq.pushlish_confirm("job_mq", msg.to_json)
+      @match[testbox].delete(id)
+    end
+  end
+
+  def get_active_jobs
+    query = {
+      "size" => 10000,
+      "query" => {
+        "term" => {
+          "job_state" => "boot"
+        }
+      }
+    }
+    @es.search("jobs", query)
+  end
+
+  def get_active_machines
     query = {
-      "size" => size,
-      "from" => from,
+      "size" => 10000,
       "query" => {
         "terms" => {
-          "state" => ["booting", "running"]
+          "state" => ["booting", "running", "rebooting"]
         }
       }
     }
     @es.search("testbox", query)
   end
+
+  def deal_job_events_from_mq
+    q = @mq.ch.queue("job_mq")
+    q.subscribe(no_ack: false) do |msg|
+      event = JSON.parse(msg.body_io.to_s)
+      job_state = event["job_state"]?
+
+      case job_state
+      when "boot"
+        deal_boot_event(event)
+      when "close"
+        deal_close_event(event)
+      when "occupied"
+        deal_occupied_event(event)
+      else
+        deal_other_event(event)
+      end
+      @mq.ch.basic_ack(msg.delivery_tag)
+    end
+  end
+
+  def deal_other_event(event)
+    event_job_id = event["job_id"].to_s
+    return if event_job_id.empty?
+
+    update_cached_job(event_job_id, event)
+
+    job = @jobs[event_job_id]?
+    return unless job
+
+    testbox = job["testbox"].to_s
+    update_cached_machine(testbox, event)
+  end
+
+  def update_cached_machine(testbox, event)
+    machine = @machines[testbox]?
+    return if machine && !event["time"].to_s.bigger_than?(machine["time"]?)
+
+    update_es_machine_time(testbox, event)
+  end
+
+  def update_es_machine_time(testbox, event)
+    machine = @es.get_tbox(testbox)
+    return unless machine
+    return unless event["time"].to_s.bigger_than?(machine["time"]?)
+
+    machine.as_h.delete("history")
+    machine.as_h["time"] = event["time"]
+    machine.as_h["state"] = JSON::Any.new("booting")
+    @machines[testbox] = machine
+    @es.update_tbox(testbox, machine.as_h)
+  end
+
+  def update_cached_job(job_id, event)
+    job = @jobs[job_id]?
+    if job
+      @jobs[job_id] = JSON.parse(job.as_h.merge!(event.as_h).to_json)
+    else
+      job = @es.get_job(job_id)
+      return unless job
+      return if JOB_CLOSE_STATE.includes?(job["job_state"]?)
+
+      job = job.dump_to_json_any.as_h
+      job.delete_if{|key, _| !JOB_KEYWORDS.includes?(key)}
+      job["job_state"] = event["job_state"]
+      @jobs[job_id] = JSON.parse(job.to_json)
+    end
+  end
+
+  def deal_occupied_event(event)
+    event_job_id = event["job_id"].to_s
+    return unless @jobs.has_key?(event_job_id)
+
+    @jobs.delete(event_job_id)
+    spawn @scheduler_api.close_job(event_job_id, "occupied", "lifecycle")
+  end
+
+  def deal_close_event(event)
+    event_job_id = event["job_id"].to_s
+    job = @jobs[event_job_id]
+
+    return unless job
+
+    @jobs.delete(event_job_id)
+    update_cached_machine(job["testbox"].to_s, event)
+  end
+
+  def deal_boot_event(event)
+    event_job_id = event["job_id"]?.to_s
+    @jobs[event_job_id] = event unless event_job_id.empty?
+    machine = @machines[event["testbox"]]?
+    deal_boot_machine(machine, event)
+  end
+
+  def deal_boot_machine(machine, event)
+    event_job_id = event["job_id"]?.to_s
+    if machine
+      machine_job_id = machine["job_id"].to_s
+      # The job is not updated
+      # No action is required
+      return if event_job_id == machine_job_id
+
+      time = machine["time"]?
+      # Skip obsolete event
+      return unless event["time"].to_s.bigger_than?(time)
+
+      @machines[event["testbox"].to_s] = event
+      deal_match_job(event["testbox"].to_s, event_job_id)
+
+      # No previous job to process
+      return if machine_job_id.empty?
+      return unless @jobs.has_key?(machine_job_id)
+
+      @jobs.delete(machine_job_id)
+      spawn @scheduler_api.close_job(machine_job_id, "occupied", "lifecycle")
+    else
+      @machines[event["testbox"].to_s] = event
+    end
+  end
+
+  def max_time(times)
+    result = ""
+    times.each do |time|
+      result = time if time.to_s.bigger_than?(result)
+    end
+    return result
+  end
+
+  def deal_timeout_job
+    dead_job_id = nil
+    loop do
+      close_job(dead_job_id, "timeout") if dead_job_id
+      deadline, dead_job_id = get_min_deadline
+
+      # deal timeout job
+      next if dead_job_id && deadline <= Time.local
+
+      sleep_until(deadline)
+    end
+  end
+
+  def deal_timeout_machine
+    dead_machine_name = nil
+    loop do
+      reboot_timeout_machine(dead_machine_name) if dead_machine_name
+      deadline, dead_machine_name = get_min_deadline_machine
+
+      next if dead_machine_name && deadline <= Time.local
+
+      sleep_until(deadline)
+    end
+  end
+
+  def sleep_until(deadline)
+    s = (deadline - Time.local).total_seconds
+    sleep(s)
+  end
+
+  def get_min_deadline
+    deadline = (Time.local + 60.second)
+    dead_job_id = nil
+    @jobs.each do |id, job|
+      next unless job["deadline"]?
+
+      job_deadline = Time.parse(job["deadline"].to_s, "%Y-%m-%dT%H:%M:%S", Time.local.location)
+      return job_deadline, id if Time.local >= job_deadline
+      next unless deadline > job_deadline
+
+      deadline = job_deadline
+      dead_job_id = id
+    end
+    return deadline, dead_job_id
+  end
+
+  def get_min_deadline_machine
+    deadline = (Time.local + 60.second)
+    dead_machine_name = nil
+    @machines.each do |name, machine|
+      next if machine["deadline"]?.to_s.empty?
+
+      machine_deadline = Time.parse(machine["deadline"].to_s, "%Y-%m-%dT%H:%M:%S", Time.local.location)
+      return machine_deadline, name if Time.local >= machine_deadline
+      next unless deadline > machine_deadline
+
+      deadline = machine_deadline
+      dead_machine_name = name
+    end
+    return deadline, dead_machine_name
+  end
+
+  def close_job(job_id, reason)
+    @jobs.delete(job_id)
+    spawn @scheduler_api.close_job(job_id, reason, "lifecycle")
+  end
+
+  def reboot_timeout_machine(testbox)
+    @machines.delete(testbox)
+    machine = @es.get_tbox(testbox)
+
+    return unless machine
+    return if MACHINE_CLOSE_STATE.includes?(machine["state"])
+
+    deadline = machine["deadline"]?
+    return unless deadline
+
+    deadline = Time.parse(deadline.to_s, "%Y-%m-%dT%H:%M:%S", Time.local.location)
+    return if Time.local < deadline
+
+    mq_queue = get_machine_reboot_queue(testbox)
+    @mq.pushlish_confirm(mq_queue, machine.to_json)
+
+    machine["state"] = "rebooting_queue"
+    machine["time"] = Time.local.to_s("%Y-%m-%dT%H:%M:%S+0800")
+    @es.update_tbox(testbox, machine.as_h)
+  end
+
+  def get_machine_reboot_queue(testbox)
+    if testbox.includes?(".")
+      testbox =~ /.*\.(.*)-\d+$/
+    else
+      testbox =~ /(.*)--.*/
+    end
+    $1
+  end
 end
diff --git a/src/lifecycle.cr b/src/lifecycle.cr
index a864621..f73cef5 100644
--- a/src/lifecycle.cr
+++ b/src/lifecycle.cr
@@ -2,15 +2,20 @@
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
 
 require "lifecycle/lifecycle"
-require "./lifecycle/constants.cr"
+require "./lifecycle/constants"
 require "./lib/json_logger"
+require "./lib/lifecycle"
 
 module Cycle
   log = JSONLogger.new
+  lifecycle = Lifecycle.new
 
-  begin
-    Kemal.run(LIFECYCLE_PORT)
-  rescue e
-    log.error(e)
-  end
+  # init @jobs and @machines
+  lifecycle.init_from_es
+  lifecycle.deal_job_events_from_mq
+
+  spawn lifecycle.deal_timeout_job
+  spawn lifecycle.deal_timeout_machine
+
+  Kemal.run(LIFECYCLE_PORT)
 end
-- 
2.23.0
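For reviewers who want to see what init_from_es would load, the query the patch sends through @es.search("testbox", ...) can also be issued by hand; a sketch assuming Elasticsearch listens on localhost:9200 (host and port are illustrative):

    # sketch: list machines in an active state, mirroring get_active_machines
    curl -s -H 'Content-Type: application/json' \
        'http://localhost:9200/testbox/_search' -d '
    {
      "size": 10000,
      "query": {
        "terms": { "state": ["booting", "running", "rebooting"] }
      }
    }'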
[PATCH v2 compass-ci] container/submit: add attach directory
by Luan Shengde 03 Mar '21

Attach the local lkp-tests directory to the container when running it to submit jobs.

[Why]
Enable users to edit code or job file(s) before submitting jobs; they can adjust the code or job files to their requirements.

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 container/submit/submit | 1 +
 1 file changed, 1 insertion(+)

diff --git a/container/submit/submit b/container/submit/submit
index 6236892..ada05f9 100755
--- a/container/submit/submit
+++ b/container/submit/submit
@@ -13,6 +13,7 @@ cmd=(
 	--name=submit-$USER-$data_suffix
 	-it
 	-v /etc/compass-ci:/etc/compass-ci:ro
+	-v $LKP_SRC:/root/lkp-tests:ro
 	-v $HOME/.config:/root/.config:ro
 	-v $HOME/.ssh:/root/.ssh:rw
 	submit
-- 
2.23.0
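With $LKP_SRC bind-mounted read-only into the container, a local edit is visible the next time the wrapper runs, with no image rebuild. A usage sketch (the job name is illustrative; the submit flags mirror the example in the submit-container doc):

    # sketch: edit a job definition locally, then submit through the container
    export LKP_SRC=$PWD/lkp-tests
    vim $LKP_SRC/jobs/iperf.yaml        # local change, picked up via the mount
    submit -c -m testbox=vm-2p8g iperf.yaml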
[PATCH compass-ci] scheduler/elasticsearch_client.cr: skip requesting state
by Wu Zhende 03 Mar '21

When a testbox is updated, its current state is saved to the history. The "requesting" state should be skipped: its job_id is empty, so storing it is useless.

Signed-off-by: Wu Zhende <wuzhende666@163.com>
---
 src/scheduler/elasticsearch_client.cr | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/scheduler/elasticsearch_client.cr b/src/scheduler/elasticsearch_client.cr
index e343809..91fb84a 100644
--- a/src/scheduler/elasticsearch_client.cr
+++ b/src/scheduler/elasticsearch_client.cr
@@ -105,7 +105,7 @@ class Elasticsearch::Client
     end
 
     history ||= [] of JSON::Any
-    history << JSON.parse(wtmp_hash.to_json)
+    history << JSON.parse(wtmp_hash.to_json) unless wtmp_hash["state"]?.to_s == "requesting"
     history = JSON.parse(history.to_json)
 
     body = { "history" => history}
-- 
2.23.0
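To confirm the history no longer collects "requesting" entries, one could pull a testbox document and list its recorded states; a sketch assuming Elasticsearch on localhost:9200 with an ES 7-style _doc endpoint and jq installed (the testbox name is illustrative):

    # sketch: print the states stored in one testbox's history
    curl -s 'http://localhost:9200/testbox/_doc/vm-2p8g--alice-1' |
        jq '._source.history[].state'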
[PATCH v4 compass-ci] doc/manual: add document for submit container
by Luan Shengde 03 Mar '21

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 doc/manual/build-lkp-test-container.en.md | 61 +++++++++++++++++++++++
 1 file changed, 61 insertions(+)
 create mode 100644 doc/manual/build-lkp-test-container.en.md

diff --git a/doc/manual/build-lkp-test-container.en.md b/doc/manual/build-lkp-test-container.en.md
new file mode 100644
index 0000000..b4f06bd
--- /dev/null
+++ b/doc/manual/build-lkp-test-container.en.md
@@ -0,0 +1,61 @@
+# Preface
+
+We provide a docker container to suit various of Linux OS(es).
+In this case you do not need to install the lkp-tests to your local server.
+Also you can avoid installation failures from undesired dependency package(s).
+
+# Prepare
+
+- install docker
+- apply account and config default yaml
+- generate ssh key(s)
+
+# build container
+
+## 1. download resource
+
+   Download lkp-tests and compass-ci to your local server.
+
+   Command(s):
+
+       git clone https://gitee.com/wu_fengguang/compass-ci.git
+       git clone https://gitee.com/wu_fengguang/lkp-tests.git
+
+## 2. add environment variable(s)
+
+   Command(s):
+
+       echo "export LKP_SRC=$PWD/lkp-tests" >> ~/.${SHELL##*/}rc
+       echo "export CCI_SRC=$PWD/compass-ci" >> ~/.${SHELL##*/}rc
+       source ~/.${SHELL##*/}rc
+
+## 3. build docker image
+
+   Command(s):
+
+       cd compass-ci/container/submit
+       ./build
+
+## 4. add executable file
+
+   Command(s):
+
+       ln -s $CCI_SRC/container/submit/submit /usr/bin/submit
+
+# try it
+
+   instruction:
+
+   You can directly use the command 'submit' to submit jobs.
+   It is the same as you install the lkp-tests on your own server.
+   It will start a disposable container to submit your job.
+   The container will attach the directory lkp-test to the container itself.
+   You can edit the job yaml(s) in lkp-test/jobs and it will take effect when you submit jobs.
+
+   Example:
+
+       submit -c -m testbox=vm-2p8g borrow-1h.yaml
+
+   About submit:
+
+   For detailed usage for command submit, please reference to: [submit user manual](https://gitee.com/wu_fengguang/compass-ci/blob/master/doc/manual/su…
-- 
2.23.0
[PATCH compass-ci] container/submit: add attach directory
by Luan Shengde 03 Mar '21

Attach lkp-tests/jobs to the container when running it.

[Why]
Enable users to submit jobs with custom job files: they can edit the job files to their requirements, and the edited files take effect when the submit container is used to submit jobs.

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 container/submit/submit | 1 +
 1 file changed, 1 insertion(+)

diff --git a/container/submit/submit b/container/submit/submit
index 6236892..ca6eb72 100755
--- a/container/submit/submit
+++ b/container/submit/submit
@@ -13,6 +13,7 @@ cmd=(
 	--name=submit-$USER-$data_suffix
 	-it
 	-v /etc/compass-ci:/etc/compass-ci:ro
+	-v $LKP_SRC/jobs:/root/lkp-tests/jobs:ro
 	-v $HOME/.config:/root/.config:ro
 	-v $HOME/.ssh:/root/.ssh:rw
 	submit
-- 
2.23.0
[PATCH compass-ci 2/4] sparrow/0-package/read-config: export config yaml info
by Liu Yinsi 03 Mar '21

export my_email, my_name, server_ip to deploy compass-ci.

Signed-off-by: Liu Yinsi <liuyinsi@163.com>
---
 sparrow/0-package/read-config | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
 create mode 100755 sparrow/0-package/read-config

diff --git a/sparrow/0-package/read-config b/sparrow/0-package/read-config
new file mode 100755
index 0000000..9da6ffc
--- /dev/null
+++ b/sparrow/0-package/read-config
@@ -0,0 +1,14 @@
+#!/bin/bash
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# export config info setup.yaml server_ip, my_email, my_name.
+
+mkdir -p /etc/compass-ci/
+cp -a $CCI_SRC/sparrow/setup.yaml /etc/compass-ci/setup.yaml
+
+options=( server_ip my_name my_email )
+
+for option in ${options[@]}
+do
+	export $option=$(grep "^$option:" /etc/compass-ci/setup.yaml |awk -F ": " '{print $2}')
+done
-- 
2.23.0
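The grep/awk pair reads one top-level "key: value" line per option. A quick way to see the extraction in isolation, using an illustrative setup.yaml:

    # sketch: extract a single option the same way read-config does
    cat > /tmp/setup.yaml <<'EOF'
    server_ip: 192.168.1.10
    my_name: Alice
    my_email: alice@example.com
    EOF
    grep "^server_ip:" /tmp/setup.yaml | awk -F ": " '{print $2}'   # prints 192.168.1.10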