Compass-ci

compass-ci@openeuler.org

5233 discussions
[PATCH v3 compass-ci 06/11] mail-robot: build success email
by Luan Shengde 09 Nov '20

build email message for successfully assigned account

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 lib/assign-account-email.rb | 52 +++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)
 create mode 100755 lib/assign-account-email.rb

diff --git a/lib/assign-account-email.rb b/lib/assign-account-email.rb
new file mode 100755
index 0000000..3fe95eb
--- /dev/null
+++ b/lib/assign-account-email.rb
@@ -0,0 +1,52 @@
+#!/usr/bin/env ruby
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# frozen_string_literal: true
+
+def build_apply_account_email(my_info)
+  email_msg = <<~EMAIL_MESSAGE
+    To: #{my_info['my_email']}
+    Subject: [compass-ci] Account Ready
+
+    Dear #{my_info['my_name']},
+
+    Thank you for joining us.
+
+    ONE-TIME SETUP
+
+    You can use the following info to submit jobs:
+
+    1) setup default config
+       run the following command to add the below setup to the default config file
+
+         mkdir -p ~/.config/compass-ci/defaults/
+         cat >> ~/.config/compass-ci/defaults/account.yaml <<-EOF
+         my_email: #{my_info['my_email']}
+         my_name: #{my_info['my_name']}
+         my_uuid: #{my_info['my_uuid']}
+         EOF
+
+    2) download lkp-tests and dependencies
+       run the following command to install lkp-tests and apply the configuration
+
+         git clone https://gitee.com/wu_fengguang/lkp-tests.git
+         cd lkp-tests
+         make install
+         source ${HOME}/.bashrc && source ${HOME}/.bash_profile
+
+    3) submit job
+       reference: https://gitee.com/wu_fengguang/compass-ci/blob/master/doc/tutorial.md
+
+       refer to the 'how to write job yaml' section to write the job yaml;
+       you can also refer to the files in compass-ci/jobs as examples.
+
+       submit jobs, for example:
+
+         submit jobs/iperf.yaml testbox=vm-2p8g
+
+    regards
+    compass-ci
+  EMAIL_MESSAGE
+
+  return email_msg
+end
--
2.23.0
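
For reference, a minimal Ruby sketch of how this helper might be driven once the file is in place; the my_info values below are hypothetical, only the hash keys come from the patch:

  require_relative 'lib/assign-account-email'

  # hypothetical values; in the real flow they are parsed from the
  # user's apply-account email
  my_info = {
    'my_email' => 'user@example.com',
    'my_name'  => 'Example User',
    'my_uuid'  => 'aaaa-bbbb-cccc'
  }

  puts build_apply_account_email(my_info)
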
[PATCH v3 compass-ci 05/11] mail-robot: apply account for user
by Luan Shengde 09 Nov '20

apply account with my_info and my_ssh_pubkey

my_info:
- my_email
- my_name
- my_uuid

check account exists

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 lib/apply-jumper-account.rb | 40 +++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)
 create mode 100755 lib/apply-jumper-account.rb

diff --git a/lib/apply-jumper-account.rb b/lib/apply-jumper-account.rb
new file mode 100755
index 0000000..7e82bb0
--- /dev/null
+++ b/lib/apply-jumper-account.rb
@@ -0,0 +1,40 @@
+#!/usr/bin/env ruby
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# frozen_string_literal: true
+
+require 'json'
+
+# apply jumper account for user
+# the entry point is apply_jumper_account
+# the apply_account_info hash needs to include
+# my_email, my_name, my_uuid, my_ssh_pubkey
+class ApplyJumperAccount
+  def initialize(my_info, my_ssh_pubkey)
+    @jumper_host = JUMPER_HOST
+    @jumper_port = JUMPER_PORT
+    @my_info = my_info.clone
+    @my_ssh_pubkey = my_ssh_pubkey
+  end
+
+  def apply_jumper_account
+    @my_info['my_ssh_pubkey'] = @my_ssh_pubkey
+
+    account_info_str = %x(curl -XGET "#{@jumper_host}:#{@jumper_port}/assign_account" \
+      -d '#{@my_info.to_json}')
+    account_info = JSON.parse account_info_str
+
+    account_info_exist(account_info)
+
+    return account_info
+  end
+
+  def account_info_exist(account_info)
+    return unless account_info['my_login_name'].nil?
+
+    error_message = ' No more available jumper account.'
+    error_message += ' You may try again later or consult the manager for a solution.'
+
+    raise error_message
+  end
+end
--
2.23.0
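
A minimal usage sketch; JUMPER_HOST/JUMPER_PORT are normally defined by the run-mail-robot.rb entry point (patch 01 of this series), and the identity values below are hypothetical:

  require_relative 'lib/apply-jumper-account'

  # normally defined in container/mail-robot/run-mail-robot.rb
  JUMPER_HOST = ENV['JUMPER_HOST'] || 'api.compass-ci.openeuler.org'
  JUMPER_PORT = ENV['JUMPER_PORT'] || 29999

  my_info = {
    'my_email' => 'user@example.com', # hypothetical values
    'my_name'  => 'Example User',
    'my_uuid'  => 'aaaa-bbbb-cccc'
  }
  pubkey = File.read(File.expand_path('~/.ssh/id_rsa.pub'))

  account_info = ApplyJumperAccount.new(my_info, pubkey).apply_jumper_account
  puts account_info['my_login_name']
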
[PATCH v3 compass-ci 01/11] mail-robot: entry point file for mail-robot service
by Luan Shengde 09 Nov '20

call
  mail-robot.rb
  parse-apply-account-email.rb
  assign-account.rb
  apply-jumper-account.rb
  assign-account-email.rb
  assign-account-fail-eamil.rb
to run the mail-robot service

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 container/mail-robot/run-mail-robot.rb | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
 create mode 100755 container/mail-robot/run-mail-robot.rb

diff --git a/container/mail-robot/run-mail-robot.rb b/container/mail-robot/run-mail-robot.rb
new file mode 100755
index 0000000..6cca2d2
--- /dev/null
+++ b/container/mail-robot/run-mail-robot.rb
@@ -0,0 +1,19 @@
+#!/usr/bin/env ruby
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# frozen_string_literal: true
+
+require_relative '../../lib/assign-account-email'
+require_relative '../../lib/assign-account-fail-eamil'
+require_relative '../../lib/apply-jumper-account'
+require_relative '../../lib/parse-apply-account-email'
+require_relative '../../lib/assign-account'
+require_relative '../../lib/mail-robot'
+
+MAILDIR = '/srv/cci/Maildir/.compass-ci'
+
+JUMPER_HOST = ENV['JUMPER_HOST'] || 'api.compass-ci.openeuler.org'
+JUMPER_PORT = ENV['JUMPER_PORT'] || 29999
+SEND_MAIL_PORT = ENV['SEND_MAIL_PORT'] || 49000
+
+monitor_new_email("#{MAILDIR}/new", "#{MAILDIR}/cur")
--
2.23.0
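
monitor_new_email itself is defined in lib/mail-robot.rb, a separate patch in this series not shown on this page. As a rough mental model only, a Maildir watcher of that shape could look like this hypothetical sketch (handle_new_email is a stand-in name, not the real API):

  require 'fileutils'

  # hypothetical sketch, NOT the real lib/mail-robot.rb implementation
  def monitor_new_email(new_dir, cur_dir)
    loop do
      Dir.each_child(new_dir) do |name|
        mail_file = File.join(new_dir, name)
        handle_new_email(File.read(mail_file)) # stand-in for the real handler
        # move the message from new/ to cur/ so it is processed only once
        FileUtils.mv(mail_file, File.join(cur_dir, name))
      end
      sleep 10
    end
  end
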
[PATCH compass-ci] sched: refactor sched class for the lkp cluster sync
by Ren Wen 09 Nov '20

Refactor the sched class according to the scheduler.cr API: extract the
request_cluster_state function from sched.cr into request_cluster_state.cr.

Signed-off-by: Ren Wen <15991987063@163.com>
---
 src/lib/sched.cr                       | 108 +-----------------------
 src/scheduler/request_cluster_state.cr | 111 +++++++++++++++++++++++++
 2 files changed, 112 insertions(+), 107 deletions(-)
 create mode 100644 src/scheduler/request_cluster_state.cr

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index 3709cb1..077b071 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -15,6 +15,7 @@
 require "../scheduler/elasticsearch_client"
 require "../scheduler/find_job_boot"
 require "../scheduler/find_next_job_boot"
+require "../scheduler/request_cluster_state"

 class Sched
   property es
@@ -49,113 +50,6 @@ class Sched
     @redis.hash_del("sched/host2queues", hostname)
   end

-  # return:
-  #   Hash(String, Hash(String, String))
-  def get_cluster_state(cluster_id)
-    cluster_state = @redis.hash_get("sched/cluster_state", cluster_id)
-    if cluster_state
-      cluster_state = Hash(String, Hash(String, String)).from_json(cluster_state)
-    else
-      cluster_state = Hash(String, Hash(String, String)).new
-    end
-    return cluster_state
-  end
-
-  # Update job info according to cluster id.
-  def update_cluster_state(cluster_id, job_id, job_info : Hash(String, String))
-    cluster_state = get_cluster_state(cluster_id)
-    if cluster_state[job_id]?
-      cluster_state[job_id].merge!(job_info)
-      @redis.hash_set("sched/cluster_state", cluster_id, cluster_state.to_json)
-    end
-  end
-
-  # Return response according to different request states.
-  # all request states:
-  #   wait_ready | abort | failed | finished | wait_finish |
-  #   write_state | roles_ip
-  def request_cluster_state(env)
-    request_state = env.params.query["state"]
-    job_id = env.params.query["job_id"]
-    cluster_id = @redis.hash_get("sched/id2cluster", job_id).not_nil!
-    cluster_state = ""
-
-    states = {"abort"       => "abort",
-              "finished"    => "finish",
-              "failed"      => "abort",
-              "wait_ready"  => "ready",
-              "wait_finish" => "finish"}
-
-    case request_state
-    when "abort", "finished", "failed"
-      # update node state only
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-    when "wait_ready"
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-      @block_helper.block_until_finished(cluster_id) {
-        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
-        cluster_state == "ready" || cluster_state == "abort"
-      }
-
-      return cluster_state
-    when "wait_finish"
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-      while 1
-        sleep(10)
-        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
-        break if (cluster_state == "finish" || cluster_state == "abort")
-      end
-
-      return cluster_state
-    when "write_state"
-      node_roles = env.params.query["node_roles"]
-      node_ip = env.params.query["ip"]
-      direct_ips = env.params.query["direct_ips"]
-      direct_macs = env.params.query["direct_macs"]
-
-      job_info = {"roles"       => node_roles,
-                  "ip"          => node_ip,
-                  "direct_ips"  => direct_ips,
-                  "direct_macs" => direct_macs}
-      update_cluster_state(cluster_id, job_id, job_info)
-    when "roles_ip"
-      role = "server"
-      role_state = get_role_state(cluster_id, role)
-      raise "Missing #{role} state in cluster state" unless role_state
-      return "server=#{role_state["ip"]}\n" \
-             "direct_server_ips=#{role_state["direct_ips"]}"
-    end
-
-    # show cluster state
-    return @redis.hash_get("sched/cluster_state", cluster_id)
-  end
-
-  # get the node state of role from cluster_state
-  private def get_role_state(cluster_id, role)
-    cluster_state = get_cluster_state(cluster_id)
-    cluster_state.each_value do |role_state|
-      return role_state if role_state["roles"] == role
-    end
-  end
-
-  # node_state: "finish" | "ready"
-  def sync_cluster_state(cluster_id, job_id, node_state)
-    cluster_state = get_cluster_state(cluster_id)
-    cluster_state.each_value do |host_state|
-      state = host_state["state"]
-      return "abort" if state == "abort"
-    end
-
-    cluster_state.each_value do |host_state|
-      state = host_state["state"]
-      next if "#{state}" == "#{node_state}"
-      return "retry"
-    end
-
-    # cluster state is node state when all nodes are normal
-    return node_state
-  end
-
   # get cluster config using own lkp_src cluster file,
   # a hash type will be returned
   def get_cluster_config(cluster_file, lkp_initrd_user, os_arch)
diff --git a/src/scheduler/request_cluster_state.cr b/src/scheduler/request_cluster_state.cr
new file mode 100644
index 0000000..07ba6fd
--- /dev/null
+++ b/src/scheduler/request_cluster_state.cr
@@ -0,0 +1,111 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  # return:
+  #   Hash(String, Hash(String, String))
+  def get_cluster_state(cluster_id)
+    cluster_state = @redis.hash_get("sched/cluster_state", cluster_id)
+    if cluster_state
+      cluster_state = Hash(String, Hash(String, String)).from_json(cluster_state)
+    else
+      cluster_state = Hash(String, Hash(String, String)).new
+    end
+    return cluster_state
+  end
+
+  # Update job info according to cluster id.
+  def update_cluster_state(cluster_id, job_id, job_info : Hash(String, String))
+    cluster_state = get_cluster_state(cluster_id)
+    if cluster_state[job_id]?
+      cluster_state[job_id].merge!(job_info)
+      @redis.hash_set("sched/cluster_state", cluster_id, cluster_state.to_json)
+    end
+  end
+
+  # Return response according to different request states.
+  # all request states:
+  #   wait_ready | abort | failed | finished | wait_finish |
+  #   write_state | roles_ip
+  def request_cluster_state(env)
+    request_state = env.params.query["state"]
+    job_id = env.params.query["job_id"]
+    cluster_id = @redis.hash_get("sched/id2cluster", job_id).not_nil!
+    cluster_state = ""
+
+    states = {"abort"       => "abort",
+              "finished"    => "finish",
+              "failed"      => "abort",
+              "wait_ready"  => "ready",
+              "wait_finish" => "finish"}
+
+    case request_state
+    when "abort", "finished", "failed"
+      # update node state only
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+    when "wait_ready"
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+      @block_helper.block_until_finished(cluster_id) {
+        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
+        cluster_state == "ready" || cluster_state == "abort"
+      }
+
+      return cluster_state
+    when "wait_finish"
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+      while 1
+        sleep(10)
+        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
+        break if (cluster_state == "finish" || cluster_state == "abort")
+      end
+
+      return cluster_state
+    when "write_state"
+      node_roles = env.params.query["node_roles"]
+      node_ip = env.params.query["ip"]
+      direct_ips = env.params.query["direct_ips"]
+      direct_macs = env.params.query["direct_macs"]
+
+      job_info = {"roles"       => node_roles,
+                  "ip"          => node_ip,
+                  "direct_ips"  => direct_ips,
+                  "direct_macs" => direct_macs}
+      update_cluster_state(cluster_id, job_id, job_info)
+    when "roles_ip"
+      role = "server"
+      role_state = get_role_state(cluster_id, role)
+      raise "Missing #{role} state in cluster state" unless role_state
+      return "server=#{role_state["ip"]}\n" \
+             "direct_server_ips=#{role_state["direct_ips"]}"
+    end
+
+    # show cluster state
+    return @redis.hash_get("sched/cluster_state", cluster_id)
+  end
+
+  # get the node state of role from cluster_state
+  private def get_role_state(cluster_id, role)
+    cluster_state = get_cluster_state(cluster_id)
+    cluster_state.each_value do |role_state|
+      return role_state if role_state["roles"] == role
+    end
+  end
+
+  # node_state: "finish" | "ready"
+  def sync_cluster_state(cluster_id, job_id, node_state)
+    cluster_state = get_cluster_state(cluster_id)
+    cluster_state.each_value do |host_state|
+      state = host_state["state"]
+      return "abort" if state == "abort"
+    end
+
+    cluster_state.each_value do |host_state|
+      state = host_state["state"]
+      next if "#{state}" == "#{node_state}"
+      return "retry"
+    end
+
+    # cluster state is node state when all nodes are normal
+    return node_state
+  end
+end
--
2.23.0
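
For orientation, a sketch of how a cluster node might drive this handler over HTTP, in the same %x(curl ...) style the mail-robot code on this page uses. The route path and the SCHED_HOST/SCHED_PORT names below are placeholders, not confirmed by the patch; only the job_id/state query parameters come from the handler itself:

  # hypothetical client-side sketch; the scheduler route path and the
  # SCHED_HOST/SCHED_PORT constants are placeholders, only the query
  # parameters (job_id, state) come from the handler above
  SCHED_HOST = ENV['SCHED_HOST'] || '172.17.0.1'
  SCHED_PORT = ENV['SCHED_PORT'] || 3000

  def request_cluster_state(job_id, state)
    %x(curl -XGET "#{SCHED_HOST}:#{SCHED_PORT}/request_cluster_state?job_id=#{job_id}&state=#{state}")
  end

  # a node reports wait_ready, then blocks until every peer is ready
  # (the handler returns "ready", "abort", or the raw cluster state)
  puts request_cluster_state('100001', 'wait_ready')
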
[PATCH v2 compass-ci] sched: refactor sched class for the lkp cluster sync
by Ren Wen 09 Nov '20

Refactor the sched class according to the scheduler.cr API: extract the
request_cluster_state function from sched.cr into request_cluster_state.cr.

Signed-off-by: Ren Wen <15991987063@163.com>
---
 src/lib/sched.cr                       | 108 +-----------------------
 src/scheduler/request_cluster_state.cr | 111 +++++++++++++++++++++++++
 2 files changed, 112 insertions(+), 107 deletions(-)
 create mode 100644 src/scheduler/request_cluster_state.cr

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index 6aba6cd..1b6eb8a 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -16,6 +16,7 @@
 require "../scheduler/elasticsearch_client"
 require "../scheduler/find_job_boot"
 require "../scheduler/find_next_job_boot"
 require "../scheduler/close_job"
+require "../scheduler/request_cluster_state"

 class Sched
   property es
@@ -50,113 +51,6 @@ class Sched
     @redis.hash_del("sched/host2queues", hostname)
   end

-  # return:
-  #   Hash(String, Hash(String, String))
-  def get_cluster_state(cluster_id)
-    cluster_state = @redis.hash_get("sched/cluster_state", cluster_id)
-    if cluster_state
-      cluster_state = Hash(String, Hash(String, String)).from_json(cluster_state)
-    else
-      cluster_state = Hash(String, Hash(String, String)).new
-    end
-    return cluster_state
-  end
-
-  # Update job info according to cluster id.
-  def update_cluster_state(cluster_id, job_id, job_info : Hash(String, String))
-    cluster_state = get_cluster_state(cluster_id)
-    if cluster_state[job_id]?
-      cluster_state[job_id].merge!(job_info)
-      @redis.hash_set("sched/cluster_state", cluster_id, cluster_state.to_json)
-    end
-  end
-
-  # Return response according to different request states.
-  # all request states:
-  #   wait_ready | abort | failed | finished | wait_finish |
-  #   write_state | roles_ip
-  def request_cluster_state(env)
-    request_state = env.params.query["state"]
-    job_id = env.params.query["job_id"]
-    cluster_id = @redis.hash_get("sched/id2cluster", job_id).not_nil!
-    cluster_state = ""
-
-    states = {"abort"       => "abort",
-              "finished"    => "finish",
-              "failed"      => "abort",
-              "wait_ready"  => "ready",
-              "wait_finish" => "finish"}
-
-    case request_state
-    when "abort", "finished", "failed"
-      # update node state only
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-    when "wait_ready"
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-      @block_helper.block_until_finished(cluster_id) {
-        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
-        cluster_state == "ready" || cluster_state == "abort"
-      }
-
-      return cluster_state
-    when "wait_finish"
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-      while 1
-        sleep(10)
-        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
-        break if (cluster_state == "finish" || cluster_state == "abort")
-      end
-
-      return cluster_state
-    when "write_state"
-      node_roles = env.params.query["node_roles"]
-      node_ip = env.params.query["ip"]
-      direct_ips = env.params.query["direct_ips"]
-      direct_macs = env.params.query["direct_macs"]
-
-      job_info = {"roles"       => node_roles,
-                  "ip"          => node_ip,
-                  "direct_ips"  => direct_ips,
-                  "direct_macs" => direct_macs}
-      update_cluster_state(cluster_id, job_id, job_info)
-    when "roles_ip"
-      role = "server"
-      role_state = get_role_state(cluster_id, role)
-      raise "Missing #{role} state in cluster state" unless role_state
-      return "server=#{role_state["ip"]}\n" \
-             "direct_server_ips=#{role_state["direct_ips"]}"
-    end
-
-    # show cluster state
-    return @redis.hash_get("sched/cluster_state", cluster_id)
-  end
-
-  # get the node state of role from cluster_state
-  private def get_role_state(cluster_id, role)
-    cluster_state = get_cluster_state(cluster_id)
-    cluster_state.each_value do |role_state|
-      return role_state if role_state["roles"] == role
-    end
-  end
-
-  # node_state: "finish" | "ready"
-  def sync_cluster_state(cluster_id, job_id, node_state)
-    cluster_state = get_cluster_state(cluster_id)
-    cluster_state.each_value do |host_state|
-      state = host_state["state"]
-      return "abort" if state == "abort"
-    end
-
-    cluster_state.each_value do |host_state|
-      state = host_state["state"]
-      next if "#{state}" == "#{node_state}"
-      return "retry"
-    end
-
-    # cluster state is node state when all nodes are normal
-    return node_state
-  end
-
   # get cluster config using own lkp_src cluster file,
   # a hash type will be returned
   def get_cluster_config(cluster_file, lkp_initrd_user, os_arch)
diff --git a/src/scheduler/request_cluster_state.cr b/src/scheduler/request_cluster_state.cr
new file mode 100644
index 0000000..ac6cb8e
--- /dev/null
+++ b/src/scheduler/request_cluster_state.cr
@@ -0,0 +1,111 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  # Return response according to different request states.
+  # all request states:
+  #   wait_ready | abort | failed | finished | wait_finish |
+  #   write_state | roles_ip
+  def request_cluster_state(env)
+    request_state = env.params.query["state"]
+    job_id = env.params.query["job_id"]
+    cluster_id = @redis.hash_get("sched/id2cluster", job_id).not_nil!
+    cluster_state = ""
+
+    states = {"abort"       => "abort",
+              "finished"    => "finish",
+              "failed"      => "abort",
+              "wait_ready"  => "ready",
+              "wait_finish" => "finish"}
+
+    case request_state
+    when "abort", "finished", "failed"
+      # update node state only
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+    when "wait_ready"
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+      @block_helper.block_until_finished(cluster_id) {
+        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
+        cluster_state == "ready" || cluster_state == "abort"
+      }
+
+      return cluster_state
+    when "wait_finish"
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+      while 1
+        sleep(10)
+        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
+        break if (cluster_state == "finish" || cluster_state == "abort")
+      end
+
+      return cluster_state
+    when "write_state"
+      node_roles = env.params.query["node_roles"]
+      node_ip = env.params.query["ip"]
+      direct_ips = env.params.query["direct_ips"]
+      direct_macs = env.params.query["direct_macs"]
+
+      job_info = {"roles"       => node_roles,
+                  "ip"          => node_ip,
+                  "direct_ips"  => direct_ips,
+                  "direct_macs" => direct_macs}
+      update_cluster_state(cluster_id, job_id, job_info)
+    when "roles_ip"
+      role = "server"
+      role_state = get_role_state(cluster_id, role)
+      raise "Missing #{role} state in cluster state" unless role_state
+      return "server=#{role_state["ip"]}\n" \
+             "direct_server_ips=#{role_state["direct_ips"]}"
+    end
+
+    # show cluster state
+    return @redis.hash_get("sched/cluster_state", cluster_id)
+  end
+
+  # node_state: "finish" | "ready"
+  def sync_cluster_state(cluster_id, job_id, node_state)
+    cluster_state = get_cluster_state(cluster_id)
+    cluster_state.each_value do |host_state|
+      state = host_state["state"]
+      return "abort" if state == "abort"
+    end
+
+    cluster_state.each_value do |host_state|
+      state = host_state["state"]
+      next if "#{state}" == "#{node_state}"
+      return "retry"
+    end
+
+    # cluster state is node state when all nodes are normal
+    return node_state
+  end
+
+  # return:
+  #   Hash(String, Hash(String, String))
+  def get_cluster_state(cluster_id)
+    cluster_state = @redis.hash_get("sched/cluster_state", cluster_id)
+    if cluster_state
+      cluster_state = Hash(String, Hash(String, String)).from_json(cluster_state)
+    else
+      cluster_state = Hash(String, Hash(String, String)).new
+    end
+    return cluster_state
+  end
+
+  # Update job info according to cluster id.
+  def update_cluster_state(cluster_id, job_id, job_info : Hash(String, String))
+    cluster_state = get_cluster_state(cluster_id)
+    if cluster_state[job_id]?
+      cluster_state[job_id].merge!(job_info)
+      @redis.hash_set("sched/cluster_state", cluster_id, cluster_state.to_json)
+    end
+  end
+
+  # get the node state of role from cluster_state
+  private def get_role_state(cluster_id, role)
+    cluster_state = get_cluster_state(cluster_id)
+    cluster_state.each_value do |role_state|
+      return role_state if role_state["roles"] == role
+    end
+  end
+end
--
2.23.0
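
The heart of the sync protocol is sync_cluster_state; a minimal Ruby rendering of its resolution rule (abort wins, any straggler means retry, otherwise the requested state has been reached cluster-wide; the job ids below are made up):

  # Ruby sketch of the Crystal sync_cluster_state above
  def sync_cluster_state(cluster_state, node_state)
    states = cluster_state.values.map { |host| host['state'] }

    return 'abort' if states.include?('abort')             # any abort aborts the cluster
    return 'retry' if states.any? { |s| s != node_state }  # some node has not arrived yet

    node_state # all nodes reached the requested state
  end

  cluster = { '100001' => { 'state' => 'ready' }, '100002' => { 'state' => 'ready' } }
  sync_cluster_state(cluster, 'ready') # => "ready"
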
[PATCH v2 compass-ci] monitoring/filter.cr: query key support regular match
by Wu Zhende 09 Nov '20

[Why]
Query keys can now use regular-expression matching.

[Example]
1: monitor job.*=
   query => {"job.*": null}
   if one of a log's keys matches "job.*", that log is matched.
2: monitor .*=
   this matches all logs

Signed-off-by: Wu Zhende <wuzhende666@163.com>
---
 src/monitoring/filter.cr | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/monitoring/filter.cr b/src/monitoring/filter.cr
index 88702b6..8e672e0 100644
--- a/src/monitoring/filter.cr
+++ b/src/monitoring/filter.cr
@@ -56,7 +56,8 @@ class Filter

   def match_query(query : Hash(String, JSON::Any), msg : Hash(String, JSON::Any))
     query.each do |key, value|
-      return false unless msg.has_key?(key)
+      key = find_real_key(key, msg.keys) unless msg.has_key?(key)
+      return false unless key

       values = value.as_a
       next if values.includes?(nil) || values.includes?(msg[key]?)
@@ -66,6 +67,12 @@ class Filter
     return true
   end

+  private def find_real_key(rule, keys)
+    keys.each do |key|
+      return key if key.to_s =~ /#{rule}/
+    end
+  end
+
   private def regular_match(rules, string)
     rules.each do |rule|
       return true if string =~ /#{rule}/
--
2.23.0
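
To make the lookup concrete, here is a Ruby rendering of the same helper (the patch itself is Crystal; the behavior shown should be equivalent):

  # Ruby sketch of the Crystal find_real_key helper above: return the
  # first message key that matches the query key treated as a regex.
  def find_real_key(rule, keys)
    keys.find { |key| key.to_s =~ /#{rule}/ }
  end

  msg_keys = ['job_state', 'job_id', 'testbox']
  find_real_key('job.*', msg_keys) # => "job_state"
  find_real_key('.*', msg_keys)    # => "job_state" (matches anything)
  find_real_key('errid', msg_keys) # => nil, so match_query returns false
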
[PATCH v8 lkp-tests 1/2] jobs/iozone-bs.yaml: combine multiple test parameter to single
by Lu Kaiyi 09 Nov '20

[why]
avoid an explosion of parameters for iozone-bs.yaml

[how]
combine the multiple test parameters into a single one

Signed-off-by: Lu Kaiyi <2392863668@qq.com>
---
 jobs/iozone-bs.yaml | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/jobs/iozone-bs.yaml b/jobs/iozone-bs.yaml
index e2cd9f48..53f1ac46 100644
--- a/jobs/iozone-bs.yaml
+++ b/jobs/iozone-bs.yaml
@@ -2,9 +2,7 @@ suite: iozone
 category: benchmark
 file_size: 4g
-write_rewrite: true
-read_reread: true
-random_read_write: true
+test: write, read, rand_rw
 block_size:
 - 64k
--
2.23.0
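
With the three booleans folded into one list-valued test parameter, an individual case can presumably still be picked at submit time with a key=value override, e.g. "submit jobs/iozone-bs.yaml test=write" (submit's override syntax as shown in the account-ready email earlier on this page; the value names write/read/rand_rw come from the patch).
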
[PATCH lkp-tests] lib/monitor: query key support regular match
by Wu Zhende 09 Nov '20

[Why]
Enables more flexible monitoring conditions.

When I use "monitor job.*=", I currently get query => {"job": {"*": null}}.
That is not what I want; I want query => {"job.*": null}.

Signed-off-by: Wu Zhende <wuzhende666@163.com>
---
 lib/monitor.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/monitor.rb b/lib/monitor.rb
index 56283e10..67c2389a 100755
--- a/lib/monitor.rb
+++ b/lib/monitor.rb
@@ -51,7 +51,7 @@ class Monitor
   def merge_overrides
     return if @overrides.empty?

-    revise_hash(@query, @overrides, true)
+    @query.merge!(@overrides)
   end

   def field_check
--
2.23.0
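
A small Ruby illustration of the behavioral difference; the nested expansion attributed to revise_hash is taken from the commit message, not re-verified here:

  query = {}
  overrides = { 'job.*' => nil }

  # Hash#merge! keeps the override key literal, which is what the
  # server-side regex-key matching expects:
  query.merge!(overrides)
  # query => { "job.*" => nil }

  # revise_hash (per the commit message) would instead expand the
  # dotted key into nested hashes:
  # query => { "job" => { "*" => nil } }
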
[PATCH v1 compass-ci] sched: refactor sched class for the close job function
by Cao Xueliang 09 Nov '20

Refactor the sched class according to the scheduler.cr API: extract the
close_job function from sched.cr into close_job.cr.

Signed-off-by: Cao Xueliang <caoxl78320@163.com>
---
 src/lib/sched.cr           | 28 +---------------------------
 src/scheduler/close_job.cr | 31 +++++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 27 deletions(-)
 create mode 100644 src/scheduler/close_job.cr

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index 3709cb1..6aba6cd 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -15,6 +15,7 @@
 require "../scheduler/elasticsearch_client"
 require "../scheduler/find_job_boot"
 require "../scheduler/find_next_job_boot"
+require "../scheduler/close_job"

 class Sched
   property es
@@ -404,33 +405,6 @@ class Sched
     @redis.hash_set("sched/tbox2ssh_port", testbox, ssh_port)
   end

-  def delete_access_key_file(job : Job)
-    File.delete(job.access_key_file) if File.exists?(job.access_key_file)
-  end
-
-  def close_job(job_id : String)
-    job = @redis.get_job(job_id)
-
-    delete_access_key_file(job) if job
-
-    response = @es.set_job_content(job)
-    if response["_id"] == nil
-      # es update fail, raise exception
-      raise "es set job content fail!"
-    end
-
-    response = @task_queue.hand_over_task(
-      "sched/#{job.queue}", "extract_stats", job_id
-    )
-    if response[0] != 201
-      raise "#{response}"
-    end
-
-    @redis.remove_finished_job(job_id)
-
-    return %({"job_id": "#{job_id}", "job_state": "complete"})
-  end
-
   private def query_consumable_keys(shortest_queue_name)
     keys = [] of String
     search = "sched/" + shortest_queue_name + "*"
diff --git a/src/scheduler/close_job.cr b/src/scheduler/close_job.cr
new file mode 100644
index 0000000..d071d69
--- /dev/null
+++ b/src/scheduler/close_job.cr
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  def close_job(job_id : String)
+    job = @redis.get_job(job_id)
+
+    delete_access_key_file(job) if job
+
+    response = @es.set_job_content(job)
+    if response["_id"] == nil
+      # es update fail, raise exception
+      raise "es set job content fail!"
+    end
+
+    response = @task_queue.hand_over_task(
+      "sched/#{job.queue}", "extract_stats", job_id
+    )
+    if response[0] != 201
+      raise "#{response}"
+    end
+
+    @redis.remove_finished_job(job_id)
+
+    return %({"job_id": "#{job_id}", "job_state": "complete"})
+  end
+
+  def delete_access_key_file(job : Job)
+    File.delete(job.access_key_file) if File.exists?(job.access_key_file)
+  end
+end
--
2.23.0
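
On success, close_job hands the caller back the small JSON string built at the end of the method; a hedged Ruby sketch of consuming it (the job id below is made up):

  require 'json'

  reply = %({"job_id": "100001", "job_state": "complete"}) # hypothetical reply
  result = JSON.parse(reply)
  puts "job #{result['job_id']} closed" if result['job_state'] == 'complete'
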
[PATCH compass-ci] sched: refactor sched class for the close job function
by Cao Xueliang 09 Nov '20

Refactor the sched class according to the scheduler.cr API: extract the
close_job function from sched.cr into close_job.cr.

Signed-off-by: Cao Xueliang <caoxl78320@163.com>
---
 src/lib/sched.cr           | 28 +---------------------------
 src/scheduler/close_job.cr | 31 +++++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 27 deletions(-)
 create mode 100644 src/scheduler/close_job.cr

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index 3709cb1..6aba6cd 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -15,6 +15,7 @@
 require "../scheduler/elasticsearch_client"
 require "../scheduler/find_job_boot"
 require "../scheduler/find_next_job_boot"
+require "../scheduler/close_job"

 class Sched
   property es
@@ -404,33 +405,6 @@ class Sched
     @redis.hash_set("sched/tbox2ssh_port", testbox, ssh_port)
   end

-  def delete_access_key_file(job : Job)
-    File.delete(job.access_key_file) if File.exists?(job.access_key_file)
-  end
-
-  def close_job(job_id : String)
-    job = @redis.get_job(job_id)
-
-    delete_access_key_file(job) if job
-
-    response = @es.set_job_content(job)
-    if response["_id"] == nil
-      # es update fail, raise exception
-      raise "es set job content fail!"
-    end
-
-    response = @task_queue.hand_over_task(
-      "sched/#{job.queue}", "extract_stats", job_id
-    )
-    if response[0] != 201
-      raise "#{response}"
-    end
-
-    @redis.remove_finished_job(job_id)
-
-    return %({"job_id": "#{job_id}", "job_state": "complete"})
-  end
-
   private def query_consumable_keys(shortest_queue_name)
     keys = [] of String
     search = "sched/" + shortest_queue_name + "*"
diff --git a/src/scheduler/close_job.cr b/src/scheduler/close_job.cr
new file mode 100644
index 0000000..d071d69
--- /dev/null
+++ b/src/scheduler/close_job.cr
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  def close_job(job_id : String)
+    job = @redis.get_job(job_id)
+
+    delete_access_key_file(job) if job
+
+    response = @es.set_job_content(job)
+    if response["_id"] == nil
+      # es update fail, raise exception
+      raise "es set job content fail!"
+    end
+
+    response = @task_queue.hand_over_task(
+      "sched/#{job.queue}", "extract_stats", job_id
+    )
+    if response[0] != 201
+      raise "#{response}"
+    end
+
+    @redis.remove_finished_job(job_id)
+
+    return %({"job_id": "#{job_id}", "job_state": "complete"})
+  end
+
+  def delete_access_key_file(job : Job)
+    File.delete(job.access_key_file) if File.exists?(job.access_key_file)
+  end
+end
--
2.23.0
