Compass-ci

compass-ci@openeuler.org

5233 discussions
[PATCH v3 compass-ci 06/11] mail-robot: build success email
by Luan Shengde, 09 Nov '20
build the email message for a successfully assigned account

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 lib/assign-account-email.rb | 52 +++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)
 create mode 100755 lib/assign-account-email.rb

diff --git a/lib/assign-account-email.rb b/lib/assign-account-email.rb
new file mode 100755
index 0000000..3fe95eb
--- /dev/null
+++ b/lib/assign-account-email.rb
@@ -0,0 +1,52 @@
+#!/usr/bin/env ruby
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# frozen_string_literal: true
+
+def build_apply_account_email(my_info)
+  email_msg = <<~EMAIL_MESSAGE
+    To: #{my_info['my_email']}
+    Subject: [compass-ci] Account Ready
+
+    Dear #{my_info['my_name']},
+
+    Thank you for joining us.
+
+    ONE-TIME SETUP
+
+    You can use the following info to submit jobs:
+
+    1) setup default config
+       run the following command to add the below setup to the default config file
+
+       mkdir -p ~/.config/compass-ci/defaults/
+       cat >> ~/.config/compass-ci/defaults/account.yaml <<-EOF
+       my_email: #{my_info['my_email']}
+       my_name: #{my_info['my_name']}
+       my_uuid: #{my_info['my_uuid']}
+       EOF
+
+    2) download lkp-tests and dependencies
+       run the following command to install lkp-tests and make the configuration take effect
+
+       git clone https://gitee.com/wu_fengguang/lkp-tests.git
+       cd lkp-tests
+       make install
+       source ${HOME}/.bashrc && source ${HOME}/.bash_profile
+
+    3) submit job
+       reference: https://gitee.com/wu_fengguang/compass-ci/blob/master/doc/tutorial.md
+
+       refer to the 'how to write job yaml' section to write the job yaml
+       you can also refer to the files in compass-ci/jobs as examples.
+
+       submit jobs, for example:
+
+       submit jobs/iperf.yaml testbox=vm-2p8g
+
+    regards
+    compass-ci
+  EMAIL_MESSAGE
+
+  return email_msg
+end
--
2.23.0
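For orientation, a minimal sketch of how such a message could be handed to the send-mail service used elsewhere in this series (its /send_mail_text endpoint); the my_info values and the localhost:49000 default are illustrative assumptions, not part of the patch:

  require 'net/http'
  require_relative 'lib/assign-account-email'

  # hypothetical values -- in the real flow these come from the parsed apply-account email
  my_info = {
    'my_email' => 'user@example.com',
    'my_name'  => 'Example User',
    'my_uuid'  => '1f4c9a2e-0d7c-4b3a-9e8d-123456789abc'
  }

  message = build_apply_account_email(my_info)

  # SEND_MAIL_PORT defaults to 49000 in run-mail-robot.rb (patch 01/11)
  uri = URI('http://localhost:49000/send_mail_text')
  Net::HTTP.post(uri, message)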
[PATCH v3 compass-ci 05/11] mail-robot: apply account for user
by Luan Shengde, 09 Nov '20
apply account with my_info and my_ssh_pubkey

my_info:
- my_email
- my_name
- my_uuid

check whether an account is available

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 lib/apply-jumper-account.rb | 40 +++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)
 create mode 100755 lib/apply-jumper-account.rb

diff --git a/lib/apply-jumper-account.rb b/lib/apply-jumper-account.rb
new file mode 100755
index 0000000..7e82bb0
--- /dev/null
+++ b/lib/apply-jumper-account.rb
@@ -0,0 +1,40 @@
+#!/usr/bin/env ruby
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# frozen_string_literal: true
+
+require 'json'
+
+# apply jumper account for user
+# the enter point is apply_jumper_account
+# the apply_account_info hash need include
+#   my_email, my_name, my_uuid, my_ssh_pubkey
+class ApplyJumperAccount
+  def initialize(my_info, my_ssh_pubkey)
+    @jumper_host = JUMPER_HOST
+    @jumper_port = JUMPER_PORT
+    @my_info = my_info.clone
+    @my_ssh_pubkey = my_ssh_pubkey
+  end
+
+  def apply_jumper_account
+    @my_info['my_ssh_pubkey'] = @my_ssh_pubkey
+
+    account_info_str = %x(curl -XGET "#{@jumper_host}:#{@jumper_port}/assign_account" \
+                          -d '#{@my_info.to_json}')
+    account_info = JSON.parse account_info_str
+
+    account_info_exist(account_info)
+
+    return account_info
+  end
+
+  def account_info_exist(account_info)
+    return unless account_info['my_login_name'].nil?
+
+    error_message = ' No more available jumper account.'
+    error_message += ' You may try again later or consult the manager for a solution.'
+
+    raise error_message
+  end
+end
--
2.23.0
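A rough usage sketch; the JUMPER_HOST/JUMPER_PORT constants normally come from run-mail-robot.rb (patch 01/11), and the email, name, and key path here are made up for illustration:

  require_relative 'lib/apply-jumper-account'

  # normally defined in container/mail-robot/run-mail-robot.rb
  JUMPER_HOST = 'api.compass-ci.openeuler.org'
  JUMPER_PORT = 29999

  my_info = {
    'my_email' => 'user@example.com',
    'my_name'  => 'Example User',
    'my_uuid'  => '1f4c9a2e-0d7c-4b3a-9e8d-123456789abc'
  }
  pub_key = File.read(File.expand_path('~/.ssh/id_rsa.pub'))

  account_info = ApplyJumperAccount.new(my_info, pub_key).apply_jumper_account
  puts account_info['my_login_name']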
[PATCH v3 compass-ci 01/11] mail-robot: entry point file for mail-robot service
by Luan Shengde, 09 Nov '20
call the following files to run the mail-robot service:
- mail-robot.rb
- parse-apply-account-email.rb
- assign-account.rb
- apply-jumper-account.rb
- assign-account-email.rb
- assign-account-fail-eamil.rb

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 container/mail-robot/run-mail-robot.rb | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
 create mode 100755 container/mail-robot/run-mail-robot.rb

diff --git a/container/mail-robot/run-mail-robot.rb b/container/mail-robot/run-mail-robot.rb
new file mode 100755
index 0000000..6cca2d2
--- /dev/null
+++ b/container/mail-robot/run-mail-robot.rb
@@ -0,0 +1,19 @@
+#!/usr/bin/env ruby
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# frozen_string_literal: true
+
+require_relative '../../lib/assign-account-email'
+require_relative '../../lib/assign-account-fail-eamil'
+require_relative '../../lib/apply-jumper-account'
+require_relative '../../lib/parse-apply-account-email'
+require_relative '../../lib/assign-account'
+require_relative '../../lib/mail-robot'
+
+MAILDIR = '/srv/cci/Maildir/.compass-ci'
+
+JUMPER_HOST = ENV['JUMPER_HOST'] || 'api.compass-ci.openeuler.org'
+JUMPER_PORT = ENV['JUMPER_PORT'] || 29999
+SEND_MAIL_PORT = ENV['SEND_MAIL_PORT'] || 49000
+
+monitor_new_email("#{MAILDIR}/new", "#{MAILDIR}/cur")
--
2.23.0
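mail-robot.rb itself is not shown in this excerpt, so as a rough sketch only: a Maildir watcher behind monitor_new_email could look like the following, where handle_new_email is a hypothetical handler name standing in for the parse/assign/reply pipeline above:

  require 'fileutils'

  # hypothetical sketch -- the real implementation lives in lib/mail-robot.rb
  def monitor_new_email(new_dir, cur_dir)
    loop do
      Dir.glob("#{new_dir}/*").each do |mail_file|
        handle_new_email(mail_file)      # assumed handler: parse mail, assign account, send reply
        FileUtils.mv(mail_file, cur_dir) # Maildir convention: move from new/ to cur/ once processed
      end
      sleep 5
    end
  end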
[PATCH compass-ci] sched: refactor sched class for the lkp cluster sync
by Ren Wen, 09 Nov '20
Refactor the sched class according to the scheduler.cr API:
extract the request_cluster_state function from sched.cr to request_cluster_state.cr.

Signed-off-by: Ren Wen <15991987063@163.com>
---
 src/lib/sched.cr                       | 108 +-----------------------
 src/scheduler/request_cluster_state.cr | 111 +++++++++++++++++++++++++
 2 files changed, 112 insertions(+), 107 deletions(-)
 create mode 100644 src/scheduler/request_cluster_state.cr

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index 3709cb1..077b071 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -15,6 +15,7 @@
 require "../scheduler/elasticsearch_client"
 require "../scheduler/find_job_boot"
 require "../scheduler/find_next_job_boot"
+require "../scheduler/request_cluster_state"
 
 class Sched
   property es
@@ -49,113 +50,6 @@ class Sched
     @redis.hash_del("sched/host2queues", hostname)
   end
 
-  # return:
-  #   Hash(String, Hash(String, String))
-  def get_cluster_state(cluster_id)
-    cluster_state = @redis.hash_get("sched/cluster_state", cluster_id)
-    if cluster_state
-      cluster_state = Hash(String, Hash(String, String)).from_json(cluster_state)
-    else
-      cluster_state = Hash(String, Hash(String, String)).new
-    end
-    return cluster_state
-  end
-
-  # Update job info according to cluster id.
-  def update_cluster_state(cluster_id, job_id, job_info : Hash(String, String))
-    cluster_state = get_cluster_state(cluster_id)
-    if cluster_state[job_id]?
-      cluster_state[job_id].merge!(job_info)
-      @redis.hash_set("sched/cluster_state", cluster_id, cluster_state.to_json)
-    end
-  end
-
-  # Return response according to different request states.
-  # all request states:
-  #   wait_ready | abort | failed | finished | wait_finish |
-  #   write_state | roles_ip
-  def request_cluster_state(env)
-    request_state = env.params.query["state"]
-    job_id = env.params.query["job_id"]
-    cluster_id = @redis.hash_get("sched/id2cluster", job_id).not_nil!
-    cluster_state = ""
-
-    states = {"abort"       => "abort",
-              "finished"    => "finish",
-              "failed"      => "abort",
-              "wait_ready"  => "ready",
-              "wait_finish" => "finish"}
-
-    case request_state
-    when "abort", "finished", "failed"
-      # update node state only
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-    when "wait_ready"
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-      @block_helper.block_until_finished(cluster_id) {
-        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
-        cluster_state == "ready" || cluster_state == "abort"
-      }
-
-      return cluster_state
-    when "wait_finish"
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-      while 1
-        sleep(10)
-        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
-        break if (cluster_state == "finish" || cluster_state == "abort")
-      end
-
-      return cluster_state
-    when "write_state"
-      node_roles = env.params.query["node_roles"]
-      node_ip = env.params.query["ip"]
-      direct_ips = env.params.query["direct_ips"]
-      direct_macs = env.params.query["direct_macs"]
-
-      job_info = {"roles"       => node_roles,
-                  "ip"          => node_ip,
-                  "direct_ips"  => direct_ips,
-                  "direct_macs" => direct_macs}
-      update_cluster_state(cluster_id, job_id, job_info)
-    when "roles_ip"
-      role = "server"
-      role_state = get_role_state(cluster_id, role)
-      raise "Missing #{role} state in cluster state" unless role_state
-      return "server=#{role_state["ip"]}\n" \
-             "direct_server_ips=#{role_state["direct_ips"]}"
-    end
-
-    # show cluster state
-    return @redis.hash_get("sched/cluster_state", cluster_id)
-  end
-
-  # get the node state of role from cluster_state
-  private def get_role_state(cluster_id, role)
-    cluster_state = get_cluster_state(cluster_id)
-    cluster_state.each_value do |role_state|
-      return role_state if role_state["roles"] == role
-    end
-  end
-
-  # node_state: "finish" | "ready"
-  def sync_cluster_state(cluster_id, job_id, node_state)
-    cluster_state = get_cluster_state(cluster_id)
-    cluster_state.each_value do |host_state|
-      state = host_state["state"]
-      return "abort" if state == "abort"
-    end
-
-    cluster_state.each_value do |host_state|
-      state = host_state["state"]
-      next if "#{state}" == "#{node_state}"
-      return "retry"
-    end
-
-    # cluster state is node state when all nodes are normal
-    return node_state
-  end
-
   # get cluster config using own lkp_src cluster file,
   # a hash type will be returned
   def get_cluster_config(cluster_file, lkp_initrd_user, os_arch)
diff --git a/src/scheduler/request_cluster_state.cr b/src/scheduler/request_cluster_state.cr
new file mode 100644
index 0000000..07ba6fd
--- /dev/null
+++ b/src/scheduler/request_cluster_state.cr
@@ -0,0 +1,111 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  # return:
+  #   Hash(String, Hash(String, String))
+  def get_cluster_state(cluster_id)
+    cluster_state = @redis.hash_get("sched/cluster_state", cluster_id)
+    if cluster_state
+      cluster_state = Hash(String, Hash(String, String)).from_json(cluster_state)
+    else
+      cluster_state = Hash(String, Hash(String, String)).new
+    end
+    return cluster_state
+  end
+
+  # Update job info according to cluster id.
+  def update_cluster_state(cluster_id, job_id, job_info : Hash(String, String))
+    cluster_state = get_cluster_state(cluster_id)
+    if cluster_state[job_id]?
+      cluster_state[job_id].merge!(job_info)
+      @redis.hash_set("sched/cluster_state", cluster_id, cluster_state.to_json)
+    end
+  end
+
+  # Return response according to different request states.
+  # all request states:
+  #   wait_ready | abort | failed | finished | wait_finish |
+  #   write_state | roles_ip
+  def request_cluster_state(env)
+    request_state = env.params.query["state"]
+    job_id = env.params.query["job_id"]
+    cluster_id = @redis.hash_get("sched/id2cluster", job_id).not_nil!
+    cluster_state = ""
+
+    states = {"abort"       => "abort",
+              "finished"    => "finish",
+              "failed"      => "abort",
+              "wait_ready"  => "ready",
+              "wait_finish" => "finish"}
+
+    case request_state
+    when "abort", "finished", "failed"
+      # update node state only
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+    when "wait_ready"
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+      @block_helper.block_until_finished(cluster_id) {
+        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
+        cluster_state == "ready" || cluster_state == "abort"
+      }
+
+      return cluster_state
+    when "wait_finish"
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+      while 1
+        sleep(10)
+        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
+        break if (cluster_state == "finish" || cluster_state == "abort")
+      end
+
+      return cluster_state
+    when "write_state"
+      node_roles = env.params.query["node_roles"]
+      node_ip = env.params.query["ip"]
+      direct_ips = env.params.query["direct_ips"]
+      direct_macs = env.params.query["direct_macs"]
+
+      job_info = {"roles"       => node_roles,
+                  "ip"          => node_ip,
+                  "direct_ips"  => direct_ips,
+                  "direct_macs" => direct_macs}
+      update_cluster_state(cluster_id, job_id, job_info)
+    when "roles_ip"
+      role = "server"
+      role_state = get_role_state(cluster_id, role)
+      raise "Missing #{role} state in cluster state" unless role_state
+      return "server=#{role_state["ip"]}\n" \
+             "direct_server_ips=#{role_state["direct_ips"]}"
+    end
+
+    # show cluster state
+    return @redis.hash_get("sched/cluster_state", cluster_id)
+  end
+
+  # get the node state of role from cluster_state
+  private def get_role_state(cluster_id, role)
+    cluster_state = get_cluster_state(cluster_id)
+    cluster_state.each_value do |role_state|
+      return role_state if role_state["roles"] == role
+    end
+  end
+
+  # node_state: "finish" | "ready"
+  def sync_cluster_state(cluster_id, job_id, node_state)
+    cluster_state = get_cluster_state(cluster_id)
+    cluster_state.each_value do |host_state|
+      state = host_state["state"]
+      return "abort" if state == "abort"
+    end
+
+    cluster_state.each_value do |host_state|
+      state = host_state["state"]
+      next if "#{state}" == "#{node_state}"
+      return "retry"
+    end
+
+    # cluster state is node state when all nodes are normal
+    return node_state
+  end
+end
--
2.23.0
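Client side, each of the request states above is driven through query parameters. A rough Ruby sketch of a cluster node talking to this API; the scheduler address and the /request_cluster_state route name are assumptions inferred from the method name, not confirmed by the patch:

  require 'net/http'

  SCHED = 'http://localhost:3000' # assumed scheduler address

  def report_cluster_state(job_id, state, extra = {})
    params = { 'job_id' => job_id, 'state' => state }.merge(extra)
    uri = URI("#{SCHED}/request_cluster_state") # route name assumed
    uri.query = URI.encode_www_form(params)
    Net::HTTP.get(uri)
  end

  # a server node publishes its role and IPs, then blocks until the whole cluster is ready
  report_cluster_state('crystal.123', 'write_state',
                       'node_roles' => 'server', 'ip' => '172.18.0.2',
                       'direct_ips' => '192.168.1.2', 'direct_macs' => '52:54:00:12:34:56')
  report_cluster_state('crystal.123', 'wait_ready')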
[PATCH v2 compass-ci] sched: refactor sched class for the lkp cluster sync
by Ren Wen, 09 Nov '20
Refactor the sched class according to the scheduler.cr API:
extract the request_cluster_state function from sched.cr to request_cluster_state.cr.

Signed-off-by: Ren Wen <15991987063@163.com>
---
 src/lib/sched.cr                       | 108 +-----------------------
 src/scheduler/request_cluster_state.cr | 111 +++++++++++++++++++++++++
 2 files changed, 112 insertions(+), 107 deletions(-)
 create mode 100644 src/scheduler/request_cluster_state.cr

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index 6aba6cd..1b6eb8a 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -16,6 +16,7 @@
 require "../scheduler/elasticsearch_client"
 require "../scheduler/find_job_boot"
 require "../scheduler/find_next_job_boot"
 require "../scheduler/close_job"
+require "../scheduler/request_cluster_state"
 
 class Sched
   property es
@@ -50,113 +51,6 @@ class Sched
     @redis.hash_del("sched/host2queues", hostname)
   end
 
-  # return:
-  #   Hash(String, Hash(String, String))
-  def get_cluster_state(cluster_id)
-    cluster_state = @redis.hash_get("sched/cluster_state", cluster_id)
-    if cluster_state
-      cluster_state = Hash(String, Hash(String, String)).from_json(cluster_state)
-    else
-      cluster_state = Hash(String, Hash(String, String)).new
-    end
-    return cluster_state
-  end
-
-  # Update job info according to cluster id.
-  def update_cluster_state(cluster_id, job_id, job_info : Hash(String, String))
-    cluster_state = get_cluster_state(cluster_id)
-    if cluster_state[job_id]?
-      cluster_state[job_id].merge!(job_info)
-      @redis.hash_set("sched/cluster_state", cluster_id, cluster_state.to_json)
-    end
-  end
-
-  # Return response according to different request states.
-  # all request states:
-  #   wait_ready | abort | failed | finished | wait_finish |
-  #   write_state | roles_ip
-  def request_cluster_state(env)
-    request_state = env.params.query["state"]
-    job_id = env.params.query["job_id"]
-    cluster_id = @redis.hash_get("sched/id2cluster", job_id).not_nil!
-    cluster_state = ""
-
-    states = {"abort"       => "abort",
-              "finished"    => "finish",
-              "failed"      => "abort",
-              "wait_ready"  => "ready",
-              "wait_finish" => "finish"}
-
-    case request_state
-    when "abort", "finished", "failed"
-      # update node state only
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-    when "wait_ready"
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-      @block_helper.block_until_finished(cluster_id) {
-        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
-        cluster_state == "ready" || cluster_state == "abort"
-      }
-
-      return cluster_state
-    when "wait_finish"
-      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
-      while 1
-        sleep(10)
-        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
-        break if (cluster_state == "finish" || cluster_state == "abort")
-      end
-
-      return cluster_state
-    when "write_state"
-      node_roles = env.params.query["node_roles"]
-      node_ip = env.params.query["ip"]
-      direct_ips = env.params.query["direct_ips"]
-      direct_macs = env.params.query["direct_macs"]
-
-      job_info = {"roles"       => node_roles,
-                  "ip"          => node_ip,
-                  "direct_ips"  => direct_ips,
-                  "direct_macs" => direct_macs}
-      update_cluster_state(cluster_id, job_id, job_info)
-    when "roles_ip"
-      role = "server"
-      role_state = get_role_state(cluster_id, role)
-      raise "Missing #{role} state in cluster state" unless role_state
-      return "server=#{role_state["ip"]}\n" \
-             "direct_server_ips=#{role_state["direct_ips"]}"
-    end
-
-    # show cluster state
-    return @redis.hash_get("sched/cluster_state", cluster_id)
-  end
-
-  # get the node state of role from cluster_state
-  private def get_role_state(cluster_id, role)
-    cluster_state = get_cluster_state(cluster_id)
-    cluster_state.each_value do |role_state|
-      return role_state if role_state["roles"] == role
-    end
-  end
-
-  # node_state: "finish" | "ready"
-  def sync_cluster_state(cluster_id, job_id, node_state)
-    cluster_state = get_cluster_state(cluster_id)
-    cluster_state.each_value do |host_state|
-      state = host_state["state"]
-      return "abort" if state == "abort"
-    end
-
-    cluster_state.each_value do |host_state|
-      state = host_state["state"]
-      next if "#{state}" == "#{node_state}"
-      return "retry"
-    end
-
-    # cluster state is node state when all nodes are normal
-    return node_state
-  end
-
   # get cluster config using own lkp_src cluster file,
   # a hash type will be returned
   def get_cluster_config(cluster_file, lkp_initrd_user, os_arch)
diff --git a/src/scheduler/request_cluster_state.cr b/src/scheduler/request_cluster_state.cr
new file mode 100644
index 0000000..ac6cb8e
--- /dev/null
+++ b/src/scheduler/request_cluster_state.cr
@@ -0,0 +1,111 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  # Return response according to different request states.
+  # all request states:
+  #   wait_ready | abort | failed | finished | wait_finish |
+  #   write_state | roles_ip
+  def request_cluster_state(env)
+    request_state = env.params.query["state"]
+    job_id = env.params.query["job_id"]
+    cluster_id = @redis.hash_get("sched/id2cluster", job_id).not_nil!
+    cluster_state = ""
+
+    states = {"abort"       => "abort",
+              "finished"    => "finish",
+              "failed"      => "abort",
+              "wait_ready"  => "ready",
+              "wait_finish" => "finish"}
+
+    case request_state
+    when "abort", "finished", "failed"
+      # update node state only
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+    when "wait_ready"
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+      @block_helper.block_until_finished(cluster_id) {
+        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
+        cluster_state == "ready" || cluster_state == "abort"
+      }
+
+      return cluster_state
+    when "wait_finish"
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
+      while 1
+        sleep(10)
+        cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
+        break if (cluster_state == "finish" || cluster_state == "abort")
+      end
+
+      return cluster_state
+    when "write_state"
+      node_roles = env.params.query["node_roles"]
+      node_ip = env.params.query["ip"]
+      direct_ips = env.params.query["direct_ips"]
+      direct_macs = env.params.query["direct_macs"]
+
+      job_info = {"roles"       => node_roles,
+                  "ip"          => node_ip,
+                  "direct_ips"  => direct_ips,
+                  "direct_macs" => direct_macs}
+      update_cluster_state(cluster_id, job_id, job_info)
+    when "roles_ip"
+      role = "server"
+      role_state = get_role_state(cluster_id, role)
+      raise "Missing #{role} state in cluster state" unless role_state
+      return "server=#{role_state["ip"]}\n" \
+             "direct_server_ips=#{role_state["direct_ips"]}"
+    end
+
+    # show cluster state
+    return @redis.hash_get("sched/cluster_state", cluster_id)
+  end
+
+  # node_state: "finish" | "ready"
+  def sync_cluster_state(cluster_id, job_id, node_state)
+    cluster_state = get_cluster_state(cluster_id)
+    cluster_state.each_value do |host_state|
+      state = host_state["state"]
+      return "abort" if state == "abort"
+    end
+
+    cluster_state.each_value do |host_state|
+      state = host_state["state"]
+      next if "#{state}" == "#{node_state}"
+      return "retry"
+    end
+
+    # cluster state is node state when all nodes are normal
+    return node_state
+  end
+
+  # return:
+  #   Hash(String, Hash(String, String))
+  def get_cluster_state(cluster_id)
+    cluster_state = @redis.hash_get("sched/cluster_state", cluster_id)
+    if cluster_state
+      cluster_state = Hash(String, Hash(String, String)).from_json(cluster_state)
+    else
+      cluster_state = Hash(String, Hash(String, String)).new
+    end
+    return cluster_state
+  end
+
+  # Update job info according to cluster id.
+  def update_cluster_state(cluster_id, job_id, job_info : Hash(String, String))
+    cluster_state = get_cluster_state(cluster_id)
+    if cluster_state[job_id]?
+      cluster_state[job_id].merge!(job_info)
+      @redis.hash_set("sched/cluster_state", cluster_id, cluster_state.to_json)
+    end
+  end
+
+  # get the node state of role from cluster_state
+  private def get_role_state(cluster_id, role)
+    cluster_state = get_cluster_state(cluster_id)
+    cluster_state.each_value do |role_state|
+      return role_state if role_state["roles"] == role
+    end
+  end
+end
--
2.23.0
[PATCH v2 compass-ci] monitoring/filter.cr: query key support regular match
by Wu Zhende, 09 Nov '20
[Why]
Allow regular-expression matching in a query's keys.

[Example]
1: monitor job.*=
   query => {"job.*":null}
   if one log's key matches "job.*" (i.e. includes "job"), it will be matched.
2: monitor .*=
   this can match all logs

Signed-off-by: Wu Zhende <wuzhende666@163.com>
---
 src/monitoring/filter.cr | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/monitoring/filter.cr b/src/monitoring/filter.cr
index 88702b6..8e672e0 100644
--- a/src/monitoring/filter.cr
+++ b/src/monitoring/filter.cr
@@ -56,7 +56,8 @@ class Filter
   def match_query(query : Hash(String, JSON::Any), msg : Hash(String, JSON::Any))
     query.each do |key, value|
-      return false unless msg.has_key?(key)
+      key = find_real_key(key, msg.keys) unless msg.has_key?(key)
+      return false unless key
 
       values = value.as_a
       next if values.includes?(nil) || values.includes?(msg[key]?)
@@ -66,6 +67,12 @@ class Filter
     return true
   end
 
+  private def find_real_key(rule, keys)
+    keys.each do |key|
+      return key if key.to_s =~ /#{rule}/
+    end
+  end
+
   private def regular_match(rules, string)
     rules.each do |rule|
       return true if string =~ /#{rule}/
--
2.23.0
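A quick Ruby transliteration of the new key lookup, using the commit message's "job.*" example (a sketch, not the Crystal code itself):

  # if the literal key is absent, fall back to treating it as a regex over the message's keys
  def find_real_key(rule, keys)
    keys.find { |key| key.to_s =~ /#{rule}/ }
  end

  msg   = { 'job_id' => 'crystal.123', 'job_state' => 'complete' }
  query = { 'job.*' => nil }

  query.each_key do |key|
    key = find_real_key(key, msg.keys) unless msg.key?(key)
    puts "matched #{key}" if key # => matched job_id
  end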
[PATCH v8 lkp-tests 1/2] jobs/iozone-bs.yaml: combine multiple test parameter to single
by Lu Kaiyi, 09 Nov '20
[why]
avoid an explosion of parameters for iozone-bs.yaml

[how]
combine multiple test parameters into a single one

Signed-off-by: Lu Kaiyi <2392863668@qq.com>
---
 jobs/iozone-bs.yaml | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/jobs/iozone-bs.yaml b/jobs/iozone-bs.yaml
index e2cd9f48..53f1ac46 100644
--- a/jobs/iozone-bs.yaml
+++ b/jobs/iozone-bs.yaml
@@ -2,9 +2,7 @@ suite: iozone
 category: benchmark
 
 file_size: 4g
-write_rewrite: true
-read_reread: true
-random_read_write: true
+test: write, read, rand_rw
 
 block_size:
 - 64k
--
2.23.0
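With the combined parameter, a single field now selects the workloads, and one value can be picked at submit time, e.g. (testbox value hypothetical):

  submit jobs/iozone-bs.yaml test=write testbox=vm-2p8g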
[PATCH lkp-tests] lib/monitor: query key support regular match
by Wu Zhende, 09 Nov '20
[Why]
Enables more flexible monitoring conditions.
When I use "monitor job.*=", I get query => {"job"=>{"*"=>nil}}.
It's not what I want. I want query => {"job.*"=>nil}.

Signed-off-by: Wu Zhende <wuzhende666@163.com>
---
 lib/monitor.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/monitor.rb b/lib/monitor.rb
index 56283e10..67c2389a 100755
--- a/lib/monitor.rb
+++ b/lib/monitor.rb
@@ -51,7 +51,7 @@ class Monitor
   def merge_overrides
     return if @overrides.empty?
 
-    revise_hash(@query, @overrides, true)
+    @query.merge!(@overrides)
   end
 
   def field_check
--
2.23.0
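The behavioral difference, spelled out (a sketch; revise_hash is the old helper being bypassed):

  overrides = { 'job.*' => nil }
  query = {}

  # old behavior: revise_hash(query, overrides, true) split the key on '.'
  # and nested it, yielding query => { 'job' => { '*' => nil } }

  # new behavior: a plain merge keeps the key intact for regex matching
  query.merge!(overrides)
  query # => { 'job.*' => nil }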
[PATCH v1 compass-ci] sched: refactor sched class for the close job function
by Cao Xueliang, 09 Nov '20
According to scheduler.cr API to refactor sched class.
Extract close_job function from sched.cr to close_job.cr.

Signed-off-by: Cao Xueliang <caoxl78320@163.com>
---
 src/lib/sched.cr           | 28 +---------------------------
 src/scheduler/close_job.cr | 31 +++++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 27 deletions(-)
 create mode 100644 src/scheduler/close_job.cr

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index 3709cb1..6aba6cd 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -15,6 +15,7 @@
 require "../scheduler/elasticsearch_client"
 require "../scheduler/find_job_boot"
 require "../scheduler/find_next_job_boot"
+require "../scheduler/close_job"
 
 class Sched
   property es
@@ -404,33 +405,6 @@ class Sched
     @redis.hash_set("sched/tbox2ssh_port", testbox, ssh_port)
   end
 
-  def delete_access_key_file(job : Job)
-    File.delete(job.access_key_file) if File.exists?(job.access_key_file)
-  end
-
-  def close_job(job_id : String)
-    job = @redis.get_job(job_id)
-
-    delete_access_key_file(job) if job
-
-    response = @es.set_job_content(job)
-    if response["_id"] == nil
-      # es update fail, raise exception
-      raise "es set job content fail!"
-    end
-
-    response = @task_queue.hand_over_task(
-      "sched/#{job.queue}", "extract_stats", job_id
-    )
-    if response[0] != 201
-      raise "#{response}"
-    end
-
-    @redis.remove_finished_job(job_id)
-
-    return %({"job_id": "#{job_id}", "job_state": "complete"})
-  end
-
   private def query_consumable_keys(shortest_queue_name)
     keys = [] of String
     search = "sched/" + shortest_queue_name + "*"
diff --git a/src/scheduler/close_job.cr b/src/scheduler/close_job.cr
new file mode 100644
index 0000000..d071d69
--- /dev/null
+++ b/src/scheduler/close_job.cr
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  def close_job(job_id : String)
+    job = @redis.get_job(job_id)
+
+    delete_access_key_file(job) if job
+
+    response = @es.set_job_content(job)
+    if response["_id"] == nil
+      # es update fail, raise exception
+      raise "es set job content fail!"
+    end
+
+    response = @task_queue.hand_over_task(
+      "sched/#{job.queue}", "extract_stats", job_id
+    )
+    if response[0] != 201
+      raise "#{response}"
+    end
+
+    @redis.remove_finished_job(job_id)
+
+    return %({"job_id": "#{job_id}", "job_state": "complete"})
+  end
+
+  def delete_access_key_file(job : Job)
+    File.delete(job.access_key_file) if File.exists?(job.access_key_file)
+  end
+end
--
2.23.0
[PATCH compass-ci] sched: refactor sched class for the close job function
by Cao Xueliang, 09 Nov '20
According to scheduler.cr API to refactor sched class.
Extract close_job function from sched.cr to close_job.cr.

Signed-off-by: Cao Xueliang <caoxl78320@163.com>
---
 src/lib/sched.cr           | 28 +---------------------------
 src/scheduler/close_job.cr | 31 +++++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+), 27 deletions(-)
 create mode 100644 src/scheduler/close_job.cr

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index 3709cb1..6aba6cd 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -15,6 +15,7 @@
 require "../scheduler/elasticsearch_client"
 require "../scheduler/find_job_boot"
 require "../scheduler/find_next_job_boot"
+require "../scheduler/close_job"
 
 class Sched
   property es
@@ -404,33 +405,6 @@ class Sched
     @redis.hash_set("sched/tbox2ssh_port", testbox, ssh_port)
   end
 
-  def delete_access_key_file(job : Job)
-    File.delete(job.access_key_file) if File.exists?(job.access_key_file)
-  end
-
-  def close_job(job_id : String)
-    job = @redis.get_job(job_id)
-
-    delete_access_key_file(job) if job
-
-    response = @es.set_job_content(job)
-    if response["_id"] == nil
-      # es update fail, raise exception
-      raise "es set job content fail!"
-    end
-
-    response = @task_queue.hand_over_task(
-      "sched/#{job.queue}", "extract_stats", job_id
-    )
-    if response[0] != 201
-      raise "#{response}"
-    end
-
-    @redis.remove_finished_job(job_id)
-
-    return %({"job_id": "#{job_id}", "job_state": "complete"})
-  end
-
   private def query_consumable_keys(shortest_queue_name)
     keys = [] of String
     search = "sched/" + shortest_queue_name + "*"
diff --git a/src/scheduler/close_job.cr b/src/scheduler/close_job.cr
new file mode 100644
index 0000000..d071d69
--- /dev/null
+++ b/src/scheduler/close_job.cr
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  def close_job(job_id : String)
+    job = @redis.get_job(job_id)
+
+    delete_access_key_file(job) if job
+
+    response = @es.set_job_content(job)
+    if response["_id"] == nil
+      # es update fail, raise exception
+      raise "es set job content fail!"
+    end
+
+    response = @task_queue.hand_over_task(
+      "sched/#{job.queue}", "extract_stats", job_id
+    )
+    if response[0] != 201
+      raise "#{response}"
+    end
+
+    @redis.remove_finished_job(job_id)
+
+    return %({"job_id": "#{job_id}", "job_state": "complete"})
+  end
+
+  def delete_access_key_file(job : Job)
+    File.delete(job.access_key_file) if File.exists?(job.access_key_file)
+  end
+end
--
2.23.0
[PATCH compass-ci] container: fix failed to build kibana images
by Liu Yinsi, 09 Nov '20
[why]
when building the kibana image on an x86 machine, it fails with:

[root@localhost kibana]# ./build
Sending build context to Docker daemon  5.12kB
Step 1/3 : FROM gagara/kibana-oss-arm64:7.6.2
7.6.2: Pulling from gagara/kibana-oss-arm64
38163f410fa0: Pull complete
69a4d016f221: Pull complete
95e6c6e7c9ca: Pull complete
d13f429dd982: Pull complete
508bb3330fb2: Pull complete
9634e726f1b6: Pull complete
9c26c37850c8: Pull complete
0d0ad8467060: Pull complete
940f92726f8b: Pull complete
Digest: sha256:541632b7e9780a007f8a8be82ac8853ddcebcb04a596c00500b73f77eacfbd16
Status: Downloaded newer image for gagara/kibana-oss-arm64:7.6.2
 ---> f482a0472f78
Step 2/3 : MAINTAINER Wu Zhende <wuzhende666@163.com>
 ---> Running in cfa86d8ce976
Removing intermediate container cfa86d8ce976
 ---> 3be6c5f24d4b
Step 3/3 : RUN sed -i 's/server.host: "0"/server.host: "0.0.0.0"/' config/kibana.yml
 ---> Running in ff455f66df8b
standard_init_linux.go:220: exec user process caused "exec format error"
libcontainer: container start initialization failed: standard_init_linux.go:220: exec user process caused "exec format error"
The command '/bin/sh -c sed -i 's/server.host: "0"/server.host: "0.0.0.0"/' config/kibana.yml' returned a non-zero code: 1

because the arm base image cannot be run on an x86 machine.

[how]
1. use an images dict to store the arm and x86 base images
2. use $(arch) to choose the base image according to the system architecture

Signed-off-by: Liu Yinsi <liuyinsi@163.com>
---
 container/kibana/Dockerfile | 4 +++-
 container/kibana/build      | 8 +++++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/container/kibana/Dockerfile b/container/kibana/Dockerfile
index 35802fe..6e0dba0 100644
--- a/container/kibana/Dockerfile
+++ b/container/kibana/Dockerfile
@@ -1,7 +1,9 @@
 # SPDX-License-Identifier: MulanPSL-2.0+
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
 
-FROM gagara/kibana-oss-arm64:7.6.2
+ARG BASE_IMAGE
+
+FROM ${BASE_IMAGE}
 
 # docker image borrowed from hub.docker.com/r/gagara/kibana-oss-arm64
 
diff --git a/container/kibana/build b/container/kibana/build
index a7e4717..60fdea2 100755
--- a/container/kibana/build
+++ b/container/kibana/build
@@ -3,4 +3,10 @@
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
 # frozen_string_literal: true
 
-system 'docker build -t kibana:7.6.2 .'
+BASE_IMAGE_DICT = {'aarch64'=>'gagara/kibana-oss-arm64:7.6.2',
+                   'x86_64'=>'kibana:7.6.2'
+}
+
+BASE_IMAGE = BASE_IMAGE_DICT[%x(arch).chomp]
+
+system "docker build -t kibana:7.6.2 --build-arg BASE_IMAGE=#{BASE_IMAGE} ."
--
2.23.0
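As a usage note: the declared ARG is consumed by FROM, so the build script's dictionary lookup expands on the command line roughly like this (image tags as in the patch):

  docker build -t kibana:7.6.2 --build-arg BASE_IMAGE=gagara/kibana-oss-arm64:7.6.2 .   # on aarch64
  docker build -t kibana:7.6.2 --build-arg BASE_IMAGE=kibana:7.6.2 .                    # on x86_64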
[PATCH v3 compass-ci] container: fix failed to build archlinux images in x86_64 machine
by Liu Yinsi, 09 Nov '20
[why]
when building the archlinux image on an x86_64 machine, it fails with:

error: failed retrieving file 'core.db' from mirrors.tuna.tsinghua.edu.cn : The requested URL returned error: 404
error: failed retrieving file 'core.db' from mirrors.163.com : The requested URL returned error: 404
error: failed retrieving file 'core.db' from mirror.archlinuxarm.org : The requested URL returned error: 404
error: failed to update core (failed to retrieve some files)
error: failed retrieving file 'extra.db' from mirrors.tuna.tsinghua.edu.cn : The requested URL returned error: 404
error: failed retrieving file 'extra.db' from mirrors.163.com : The requested URL returned error: 404
error: failed retrieving file 'extra.db' from mirror.archlinuxarm.org : Resolving timed out after 10000 milliseconds
error: failed to update extra (download library error)
error: failed retrieving file 'community.db' from mirrors.tuna.tsinghua.edu.cn : The requested URL returned error: 404
error: failed retrieving file 'community.db' from mirrors.163.com : The requested URL returned error: 404
error: failed retrieving file 'community.db' from mirror.archlinuxarm.org : The requested URL returned error: 404
error: failed to update community (failed to retrieve some files)
error: failed to synchronize all databases
The command '/bin/sh -c pacman --needed --noprogressbar --noconfirm -Syu && pacman --needed --noprogressbar --noconfirm -S bash zsh git openssh rsync make gcc tzdata sudo coreutils util-linux vim gawk' returned a non-zero code: 1

because the archlinux arm mirror does not support x86_64 machines.

[how]
root/etc/pacman.d/mirrorlist =>
  root/aarch64/etc/pacman.d/mirrorlist
  root/x86_64/etc/pacman.d/mirrorlist
use ARCH=$(arch) to choose the mirrorlist according to the system architecture.

Signed-off-by: Liu Yinsi <liuyinsi@163.com>
---
 container/archlinux/Dockerfile                           | 6 +++++-
 container/archlinux/build                                | 2 +-
 .../archlinux/root/{ => aarch64}/etc/pacman.d/mirrorlist | 0
 container/archlinux/root/x86_64/etc/pacman.d/mirrorlist  | 1 +
 4 files changed, 7 insertions(+), 2 deletions(-)
 rename container/archlinux/root/{ => aarch64}/etc/pacman.d/mirrorlist (100%)
 create mode 100644 container/archlinux/root/x86_64/etc/pacman.d/mirrorlist

diff --git a/container/archlinux/Dockerfile b/container/archlinux/Dockerfile
index c0f05d3..1f80ae0 100644
--- a/container/archlinux/Dockerfile
+++ b/container/archlinux/Dockerfile
@@ -5,7 +5,11 @@ FROM lopsided/archlinux
 
 MAINTAINER Wu Fenguang <wfg@mail.ustc.edu.cn>
 
-COPY root /
+ARG ARCH
+
+COPY root/$ARCH /
+
 RUN chmod 755 /etc
+
 RUN pacman --needed --noprogressbar --noconfirm -Syu && \
     pacman --needed --noprogressbar --noconfirm -S bash zsh git openssh rsync make gcc tzdata sudo coreutils util-linux vim gawk
diff --git a/container/archlinux/build b/container/archlinux/build
index 81feda2..9749489 100755
--- a/container/archlinux/build
+++ b/container/archlinux/build
@@ -2,4 +2,4 @@
 # SPDX-License-Identifier: MulanPSL-2.0+
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
 
-docker build -t archlinux:testbed .
+docker build --build-arg ARCH=$(arch) -t archlinux:testbed .
diff --git a/container/archlinux/root/etc/pacman.d/mirrorlist b/container/archlinux/root/aarch64/etc/pacman.d/mirrorlist
similarity index 100%
rename from container/archlinux/root/etc/pacman.d/mirrorlist
rename to container/archlinux/root/aarch64/etc/pacman.d/mirrorlist
diff --git a/container/archlinux/root/x86_64/etc/pacman.d/mirrorlist b/container/archlinux/root/x86_64/etc/pacman.d/mirrorlist
new file mode 100644
index 0000000..556fac8
--- /dev/null
+++ b/container/archlinux/root/x86_64/etc/pacman.d/mirrorlist
@@ -0,0 +1 @@
+Server = http://mirrors.tuna.tsinghua.edu.cn/archlinux/$repo/os/$arch
--
2.23.0
[PATCH v5 compass-ci] compare values by each metrics
by Lu Weitao, 09 Nov '20
compare values by each metric based on groups_matrices, and format the compare result as an echart data_set

background:
To support the compare-with-user-defined-template feature, the work-flow is:
  load compare_template.yaml
  --> query_results(ES)
  --> auto group jobs_list
  --> create groups_matrices
  --> compare_values by each metric
  --> format/show results

current patch does:
  compare_values by each metric --> format/show results

Signed-off-by: Lu Weitao <luweitaobe@163.com>
---
 lib/compare_data_format.rb |  18 +++++++
 lib/compare_matrixes.rb    | 103 +++++++++++++++++++++++++++++++++++++
 2 files changed, 121 insertions(+)
 create mode 100644 lib/compare_data_format.rb

diff --git a/lib/compare_data_format.rb b/lib/compare_data_format.rb
new file mode 100644
index 0000000..3d82550
--- /dev/null
+++ b/lib/compare_data_format.rb
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: MulanPSL-2.0+ or GPL-2.0
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# frozen_string_literal: true
+
+# ----------------------------------------------------------------------------------------------------
+# format compare results for a specific format
+#
+
+def format_for_echart(metrics_compare_results, template_params)
+  echart_result = {}
+  echart_result['title'] = template_params['title']
+  echart_result['unit'] = template_params['unit']
+  x_params = template_params['x_params']
+  echart_result['x_name'] = x_params.join('|') if x_params
+  echart_result['tables'] = metrics_compare_results
+
+  echart_result
+end
diff --git a/lib/compare_matrixes.rb b/lib/compare_matrixes.rb
index 078028a..119d42d 100644
--- a/lib/compare_matrixes.rb
+++ b/lib/compare_matrixes.rb
@@ -6,6 +6,7 @@ LKP_SRC ||= ENV['LKP_SRC'] || File.dirname(__dir__)
 require 'set'
 require 'json/ext'
 require_relative 'themes'
+require_relative './compare_data_format.rb'
 require "#{LKP_SRC}/lib/stats"
 
 FAILURE_PATTERNS = IO.read("#{LKP_SRC}/etc/failure").split("\n")
@@ -399,6 +400,108 @@ def compare_group_matrices(group_matrices, options)
   result_str
 end
 
+# input: groups_matrices
+# {
+#   group_key_1 => {
+#     dimension_1 => matrix_1, (openeuler 20.03)
+#     dimension_2 => matrix_2, (openeuler 20.09)
+#     dimension_3 => matrix_3, (centos 7.6)
+#   },
+#   group_key_2 => {...}
+# }
+#
+# output: compare_metrics_values
+# {
+#   group_key_1 => {
+#     metric_1 => {
+#       'average' => {
+#         'dimension_1' => xxx
+#         'dimension_2' => xxx
+#         'dimension_3' => xxx
+#       },
+#       'standard_deviation' => {
+#         'dimension_1' => xxx
+#         'dimension_2' => xxx
+#         'dimension_3' => xxx
+#       },
+#       'change' => {
+#         'dimension_2 vs dimension_1' => xxx
+#         'dimension_3 vs dimension_1' => xxx
+#         'dimension_3 vs dimension_2' => xxx
+#       }
+#     },
+#     metric_2 => {...}
+#   }
+# }
+def compare_metrics_values(groups_matrices)
+  metrics_compare_values = {}
+  groups_matrices.each do |group_key, dimensions|
+    metrics_compare_values[group_key] = get_metric_values(dimensions)
+  end
+
+  metrics_compare_values
+end
+
+def get_metric_values(dimensions)
+  metrics_values = {}
+  dimensions.each do |dim, matrix|
+    matrix.each do |metric, values|
+      assign_metric_values(metrics_values, dim, metric, values)
+    end
+  end
+  assign_metric_change(metrics_values)
+
+  metrics_values
+end
+
+def assign_metric_values(metrics_values, dim, metric, values)
+  metrics_values[metric] ||= {}
+  metrics_values[metric]['average'] ||= {}
+  metrics_values[metric]['standard_deviation'] ||= {}
+  metric_value = get_values(values, true)
+  metrics_values[metric]['average'][dim] = metric_value[:average]
+  metrics_values[metric]['standard_deviation'][dim] = metric_value[:stddev]
+end
+
+def assign_metric_change(metrics_values)
+  metrics_values.each do |metric, values|
+    metrics_values[metric]['change'] = {}
+    next if values['average'].size < 2
+
+    dimension_list = values['average'].keys
+    dimension_groups = get_dimensions_combination(dimension_list)
+    dimension_groups.each do |base_dimension, challenge_dimension|
+      change = get_compare_value(values['average'][base_dimension], values['average'][challenge_dimension], true)
+      values['change']["#{challenge_dimension} vs #{base_dimension}"] = change
+    end
+  end
+end
+
+# input: dimension_list
+# eg: ['openeuler 20.03', 'debian sid', 'centos 7.6']
+# output: Array(base_dimension: String, challenge_dimension: String)
+# [
+#   ['openeuler 20.03', 'debian sid'],
+#   ['openeuler 20.03', 'centos 7.6'],
+#   ['debian sid', 'centos 7.6']
+# ]
+def get_dimensions_combination(dimension_list)
+  dims = []
+  dimension_list_size = dimension_list.size
+  (1..dimension_list_size - 1).each do |i|
+    (i..dimension_list_size - 1).each do |j|
+      dims << [dimension_list[i - 1], dimension_list[j]]
+    end
+  end
+
+  dims
+end
+
+def show_compare_result(metrics_compare_results, template_params)
+  echart_results = format_for_echart(metrics_compare_results, template_params)
+  print JSON.pretty_generate(echart_results)
+end
+
 # Format Fields
 def format_fails_runs(fails, runs)
--
2.23.0
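The pairwise enumeration documented above is equivalent to Ruby's built-in Array#combination; a quick check against the documented example (a sketch, not part of the patch):

  def get_dimensions_combination(dimension_list)
    # pairwise [base, challenge] pairs, preserving list order
    dimension_list.combination(2).to_a
  end

  p get_dimensions_combination(['openeuler 20.03', 'debian sid', 'centos 7.6'])
  # => [["openeuler 20.03", "debian sid"],
  #     ["openeuler 20.03", "centos 7.6"],
  #     ["debian sid", "centos 7.6"]]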
[PATCH v8 compass-ci 3/3] assign-account: add my_info for assign account
by Luan Shengde, 09 Nov '20
add my info when executing the apply account command

[why]
when applying an account, we need to add my_info to the default yaml file:
~/.config/compass-ci/defaults/account.yaml
my_info:
- my_email
- my_name
- my_uuid

[how]
add my_info along with the pub_key when applying the account

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 container/assign-account/answerback-email.rb | 94 +++++++++++---------
 1 file changed, 54 insertions(+), 40 deletions(-)

diff --git a/container/assign-account/answerback-email.rb b/container/assign-account/answerback-email.rb
index bb8e809..686a327 100755
--- a/container/assign-account/answerback-email.rb
+++ b/container/assign-account/answerback-email.rb
@@ -12,58 +12,63 @@ require 'mail'
 require 'set'
 require 'optparse'
 require_relative '../defconfig'
+require_relative '../../lib/es_client'
 
 names = Set.new %w[
   JUMPER_HOST
   JUMPER_PORT
-  SEND_MAIL_HOST_INTERNET
-  SEND_MAIL_PORT_INTERNET
+  SEND_MAIL_HOST
+  SEND_MAIL_PORT
 ]
 
 defaults = relevant_defaults(names)
 
 JUMPER_HOST = defaults['JUMPER_HOST'] || 'api.compass-ci.openeuler.org'
 JUMPER_PORT = defaults['JUMPER_PORT'] || 29999
-SEND_MAIL_HOST = defaults['SEND_MAIL_HOST_INTERNET'] || 'localhost'
-SEND_MAIL_PORT = defaults['SEND_MAIL_PORT_INTERNET'] || 11312
+SEND_MAIL_HOST = defaults['SEND_MAIL_HOST'] || 'localhost'
+SEND_MAIL_PORT = defaults['SEND_MAIL_PORT'] || 49000
 
-$apply_info = {
+my_info = {
   'my_email' => nil,
+  'my_name' => nil,
+  'my_uuid' => %x(uuidgen).chomp,
   'my_ssh_pubkey' => nil
 }
 
-def init_info(email_file)
+def init_info(email_file, my_info)
   mail_content = Mail.read(email_file)
-
-  $apply_info['my_email'] = mail_content.from[0]
-  $apply_info['my_ssh_pubkey'] = if mail_content.part[1].filename == 'id_rsa.pub'
-                                   mail_content.part[1].body.decoded.gsub(/\r|\n/, '')
-                                 end
-
-  $apply_info
+  my_info['my_email'] = mail_content.from[0]
+  my_info['my_name'] = mail_content.From.unparsed_value.gsub(/ <[^<>]*>/, '')
+  my_info['my_ssh_pubkey'] = if mail_content.part[1].filename == 'id_rsa.pub'
+                               mail_content.part[1].body.decoded
+                             end
 end
 
 options = OptionParser.new do |opts|
-  opts.banner = "Usage: answerback-mail.rb [--email email] [--ssh-pubkey pub_key_file] [--raw-email email_file]\n"
+  opts.banner = 'Usage: answerback-mail.rb [-e|--email email] '
+  opts.banner += "[-s|--ssh-pubkey pub_key_file] [-f|--raw-email email_file]\n"
   opts.banner += "  -e or -f is required\n"
   opts.banner += '  -s is optional when use -e'
 
   opts.separator ''
   opts.separator 'options:'
 
-  opts.on('-e|--email email_address', 'appoint email address') do |email_address|
-    $apply_info['my_email'] = email_address
+  opts.on('-e email_address', '--email email_address', 'appoint email address') do |email_address|
+    my_info['my_email'] = email_address
+    # when apply account with email address, will get no user name
+    my_info['my_name'] = ''
   end
 
-  opts.on('-s|--ssh-pubkey pub_key_file', 'ssh pub_key file, enable password-less login') do |pub_key_file|
-    $apply_info['my_ssh_pubkey'] = File.read(pub_key_file)
+  opts.on('-s pub_key_file', '--ssh-pubkey pub_key_file', \
+          'ssh pub_key file, enable password-less login') do |pub_key_file|
+    my_info['my_ssh_pubkey'] = File.read(pub_key_file)
   end
 
-  opts.on('-f|--raw-email email_file', 'email file') do |email_file|
-    init_info(email_file)
+  opts.on('-f email_file', '--raw-email email_file', 'email file') do |email_file|
+    init_info(email_file, my_info)
   end
 
-  opts.on_tail('-h|--help', 'show this message') do
+  opts.on_tail('-h', '--help', 'show this message') do
     puts opts
     exit
   end
@@ -71,21 +76,24 @@ end
 
 options.parse!(ARGV)
 
-def build_message(email, acct_infos)
+def build_message(email, account_info)
   message = <<~EMAIL_MESSAGE
     To: #{email}
-    Subject: jumper account is ready
+    Subject: [compass-ci] jumper account is ready
 
     Dear user:
 
     Thank you for joining us.
     You can use the following command to login the jumper server:
 
-    login command:
-    ssh -p #{acct_infos['jumper_port']} #{acct_infos['account']}@#{acct_infos['jumper_ip']}
+    Login command:
+    ssh -p #{account_info['jumper_port']} #{account_info['my_login_name']}@#{account_info['jumper_host']}
+
+    Account password:
+    #{account_info['my_password']}
 
-    account password:
-    #{acct_infos['passwd']}
+    Suggest:
+    If you use the password to login, change it in time.
 
     regards
     compass-ci
@@ -94,26 +102,32 @@ def build_message(email, acct_infos)
   return message
 end
 
-def account_info(pub_key)
-  account_info_str = if pub_key.nil?
-                       %x(curl -XGET '#{JUMPER_HOST}:#{JUMPER_PORT}/assign_account')
-                     else
-                       %x(curl -XGET '#{JUMPER_HOST}:#{JUMPER_PORT}/assign_account' -d "pub_key: #{pub_key}")
-                     end
+def apply_account(my_info)
+  account_info_str = %x(curl -XGET '#{JUMPER_HOST}:#{JUMPER_PORT}/assign_account' -d '#{my_info.to_json}')
 
   JSON.parse account_info_str
 end
 
-def send_account
+def send_account(my_info)
   message = "No email address specified\n"
-  message += "use -e email_address add a email address\n"
+  message += "use -e to add a email address\n"
   message += 'or use -f to add a email file'
-  raise message if $apply_info['my_email'].nil?
+  raise message if my_info['my_email'].nil?
 
-  acct_info = account_info($apply_info['my_ssh_pubkey'])
-
-  message = build_message($apply_info['my_email'], acct_info)
+  account_info = apply_account(my_info)
+  # for manually assign account, there will be no my_commit_url
+  # but the key my_commit_url is required for es
+  my_info['my_commit_url'] = ''
+  my_info['my_login_name'] = account_info['my_login_name']
+  my_info.delete 'my_ssh_pubkey'
+  store_account_info(my_info)
+  message = build_message(my_info['my_email'], account_info)
   %x(curl -XPOST '#{SEND_MAIL_HOST}:#{SEND_MAIL_PORT}/send_mail_text' -d "#{message}")
 end
 
-send_account
+def store_account_info(my_info)
+  es = ESClient.new(index: 'accounts')
+  es.put_source_by_id(my_info['my_email'], my_info)
+end
+
+send_account(my_info)
--
2.23.0
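Typical invocations, per the option banner (addresses and file paths hypothetical):

  ./answerback-email.rb -e user@example.com -s ~/.ssh/id_rsa.pub
  ./answerback-email.rb -f /srv/cci/Maildir/.compass-ci/cur/<mail_file>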
[PATCH v8 compass-ci 2/3] assign-account: add config default yaml
by Luan Shengde, 09 Nov '20
when assigning an account, config the default yaml

[why]
easier for the user to config the default yaml file

[how]
write my_info to the default yaml file ~/.config/compass-ci/defaults/account.yaml
my_info:
- my_email
- my_name
- my_uuid

Signed-off-by: Luan Shengde <shdluan@163.com>
---
 container/assign-account/get_account_info.rb | 88 ++++++++++++--------
 1 file changed, 53 insertions(+), 35 deletions(-)

diff --git a/container/assign-account/get_account_info.rb b/container/assign-account/get_account_info.rb
index 2f93d5b..24d476b 100755
--- a/container/assign-account/get_account_info.rb
+++ b/container/assign-account/get_account_info.rb
@@ -7,17 +7,17 @@
 ACCOUNT_DIR dir layout:
 tree
-├── assigned-users
-│   ├── user1
-│   ├── user2
-│   ├── user3
-│   ├── ...
-├── available-users
-│   ├── user11
-│   ├── user12
-│   ├── user13
-│   ├── ...
-└── jumper-info
+|-- assigned-users
+|   |-- user1
+|   |-- user2
+|   |-- user3
+|   |-- ...
+|-- available-users
+|   |-- user11
+|   |-- user12
+|   |-- user13
+|   |-- ...
+|-- jumper-info
 
 assigned-users: store assigned user files
 available-users: store available user files
@@ -30,9 +30,10 @@ API:
 call graph:
 setup_jumper_account_info
   read_account_info
-    build_account_name
+    build_account_info
   read_jumper_info
-  setup_authorized_key
+  config_default_yaml
+  config_authorized_key
 
 the returned data for setup_jumper_account_info like:
 {
@@ -44,6 +45,8 @@ the returned data for setup_jumper_account_info like:
 
 =end
 
+require 'fileutils'
+
 # get jumper and account info
 class AccountStorage
   ACCOUNT_DIR = '/opt/account_data/'
@@ -61,12 +64,12 @@
     message = 'no more available users'
     raise message if files.empty?
 
-    account_info = build_account_name(available_dir, files)
+    account_info = build_account_info(available_dir, files)
 
     return account_info
   end
 
-  def build_account_name(available_dir, files)
+  def build_account_info(available_dir, files)
     files.sort
     account_info = []
     account_info.push files[0]
@@ -93,35 +96,50 @@
   def setup_jumper_account_info
     account_info = read_account_info
     jumper_info = read_jumper_info
-    pub_key = @data['pub_key'] unless @data.nil?
-
-    jumper_ip = jumper_info[0].chomp
-    jumper_port = jumper_info[1].chomp
-    account = account_info[0]
-    passwd = if pub_key.nil?
-               account_info[1]
-             else
-               'Use pub_key to login'
-             end
-    jumper_account_info = {
-      'account' => account,
-      'passwd' => passwd,
-      'jumper_ip' => jumper_ip,
-      'jumper_port' => jumper_port
+    pub_key = @data['my_ssh_pubkey'] unless @data['my_ssh_pubkey'].nil?
+
+    login_name = account_info[0]
+    password = if pub_key.nil?
+                 account_info[1]
+               else
+                 'Use pub_key to login'
+               end
+
+    new_account_info = {
+      'my_login_name' => login_name,
+      'my_password' => password,
+      'jumper_host' => jumper_info[0].chomp,
+      'jumper_port' => jumper_info[1].chomp
     }
-    setup_authorized_key(account, pub_key)
 
-    return jumper_account_info
+    config_authorized_key(login_name, pub_key) unless pub_key.nil?
+    config_default_yaml(login_name)
+
+    return new_account_info
+  end
+
+  def config_default_yaml(login_name)
+    default_yaml_dir = File.join('/home', login_name, '.config/compass-ci/defaults')
+    FileUtils.mkdir_p default_yaml_dir
+
+    # my_email, my_name, my_uuid are required to config the default yaml file
+    # they are added along with 'my_ssh_pubkey' when sending the assign account request
+    File.open("#{default_yaml_dir}/account.yaml", 'a') do |file|
+      file.puts "my_email: #{@data['my_email']}"
+      file.puts "my_name: #{@data['my_name']}"
+      file.puts "my_uuid: #{@data['my_uuid']}"
+    end
+    %x(chown -R #{login_name}:#{login_name} "/home/#{login_name}/.config")
   end
 
-  def setup_authorized_key(account, pub_key)
-    ssh_dir = File.join('/home/', account, '.ssh')
+  def config_authorized_key(login_name, pub_key)
+    ssh_dir = File.join('/home/', login_name, '.ssh')
     Dir.mkdir ssh_dir, 0o700
     Dir.chdir ssh_dir
     f = File.new('authorized_keys', 'w')
     f.puts pub_key
     f.close
     File.chmod 0o600, 'authorized_keys'
-    %x(chown -R #{account}:#{account} #{ssh_dir})
+    %x(chown -R #{login_name}:#{login_name} #{ssh_dir})
   end
 end
--
2.23.0
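After assignment, config_default_yaml leaves the new user with a defaults file like the following (values hypothetical):

  # /home/<login_name>/.config/compass-ci/defaults/account.yaml
  my_email: user@example.com
  my_name: Example User
  my_uuid: 1f4c9a2e-0d7c-4b3a-9e8d-123456789abc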
[PATCH compass-ci] container/web-backend-nginx: add proxy for srv-http server
by Lu Weitao, 09 Nov '20
[why]
we need to access the api by https request

[how]
use the web-backend-nginx server to proxy the srv-http server, so users can access results like:
https://api.compass-ci.openeuler.org:11320/result/iozone/taishan200-2288-2s…

Signed-off-by: Lu Weitao <luweitaobe@163.com>
---
 container/web-backend-nginx/nginx.conf | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/container/web-backend-nginx/nginx.conf b/container/web-backend-nginx/nginx.conf
index b461a2c..c9a4ec4 100644
--- a/container/web-backend-nginx/nginx.conf
+++ b/container/web-backend-nginx/nginx.conf
@@ -22,5 +22,14 @@ http {
       # web-backend server
       proxy_pass http://172.17.0.1:32767;
     }
+
+    location ~ ^/(result|pub) {
+      proxy_set_header Host $host;
+      proxy_set_header X-Real-IP $remote_addr;
+      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+
+      # srv-http server
+      proxy_pass http://172.17.0.1:11300;
+    }
   }
 }
--
2.23.0
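Once deployed, a result path served by srv-http on port 11300 becomes reachable through the same https endpoint, for example (job path hypothetical):

  curl https://api.compass-ci.openeuler.org:11320/result/iozone/<testbox>/<job_id>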
[PATCH v3 compass-ci] sched: refactor sched class for the find job boot function
by Cao Xueliang, 09 Nov '20
Refactor the sched class according to the scheduler.cr API:
- extract the find_job_boot function from sched.cr to find_job_boot.cr
- extract the find_next_job_boot function from sched.cr to find_next_job_boot.cr

Signed-off-by: Cao Xueliang <caoxl78320@163.com>
---
 src/lib/sched.cr                    | 237 +---------------------------
 src/scheduler/find_job_boot.cr      | 220 ++++++++++++++++++++++++++
 src/scheduler/find_next_job_boot.cr |  14 ++
 3 files changed, 237 insertions(+), 234 deletions(-)
 create mode 100644 src/scheduler/find_job_boot.cr
 create mode 100644 src/scheduler/find_next_job_boot.cr

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index a4b12b4..6ecd95d 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -13,6 +13,9 @@
 require "../scheduler/jobfile_operate"
 require "../scheduler/redis_client"
 require "../scheduler/elasticsearch_client"
+require "../scheduler/find_job_boot"
+require "../scheduler/find_next_job_boot"
+
 class Sched
   property es
   property redis
@@ -323,230 +326,6 @@ class Sched
     @es.set_job_content(job)
   end
 
-  private def ipxe_msg(msg)
-    "#!ipxe
-    echo ...
-    echo #{msg}
-    echo ...
-    reboot"
-  end
-
-  private def grub_msg(msg)
-    "#!grub
-    echo ...
-    echo #{msg}
-    echo ...
-    reboot"
-  end
-
-  private def get_boot_container(job : Job)
-    response = Hash(String, String).new
-    response["docker_image"] = "#{job.docker_image}"
-    response["lkp"] = "http://#{INITRD_HTTP_HOST}:#{INITRD_HTTP_PORT}" +
-                      JobHelper.service_path("#{SRV_INITRD}/lkp/#{job.lkp_initrd_user}/lkp-#{job.arch}.cgz")
-    response["job"] = "http://#{SCHED_HOST}:#{SCHED_PORT}/job_initrd_tmpfs/#{job.id}/job.cgz"
-
-    return response.to_json
-  end
-
-  private def get_boot_grub(job : Job)
-    initrd_lkp_cgz = "lkp-#{job.os_arch}.cgz"
-
-    response = "#!grub\n\n"
-    response += "linux (http,#{OS_HTTP_HOST}:#{OS_HTTP_PORT})"
-    response += "#{JobHelper.service_path("#{SRV_OS}/#{job.os_dir}/vmlinuz")} user=lkp"
-    response += " job=/lkp/scheduled/job.yaml RESULT_ROOT=/result/job"
-    response += " rootovl ip=dhcp ro root=#{job.kernel_append_root}\n"
-
-    response += "initrd (http,#{OS_HTTP_HOST}:#{OS_HTTP_PORT})"
-    response += JobHelper.service_path("#{SRV_OS}/#{job.os_dir}/initrd.lkp")
-    response += " (http,#{INITRD_HTTP_HOST}:#{INITRD_HTTP_PORT})"
-    response += JobHelper.service_path("#{SRV_INITRD}/lkp/#{job.lkp_initrd_user}/#{initrd_lkp_cgz}")
-    response += " (http,#{SCHED_HOST}:#{SCHED_PORT})/job_initrd_tmpfs/"
-    response += "#{job.id}/job.cgz\n"
-
-    response += "boot\n"
-
-    return response
-  end
-
-  def touch_access_key_file(job : Job)
-    FileUtils.touch(job.access_key_file)
-  end
-
-  def boot_content(job : Job | Nil, boot_type : String)
-    touch_access_key_file(job) if job
-
-    case boot_type
-    when "ipxe"
-      return job ? get_boot_ipxe(job) : ipxe_msg("No job now")
-    when "grub"
-      return job ? get_boot_grub(job) : grub_msg("No job now")
-    when "container"
-      return job ? get_boot_container(job) : Hash(String, String).new.to_json
-    else
-      raise "Not defined boot type #{boot_type}"
-    end
-  end
-
-  def rand_queues(queues)
-    return queues if queues.empty?
-
-    queues_size = queues.size
-    base = Random.rand(queues_size)
-    temp_queues = [] of String
-
-    (0..queues_size - 1).each do |index|
-      temp_queues << queues[(index + base) % queues_size]
-    end
-
-    return temp_queues
-  end
-
-  def get_queues(host)
-    queues = [] of String
-
-    queues_str = @redis.hash_get("sched/host2queues", host)
-    return queues unless queues_str
-
-    queues_str.split(',', remove_empty: true) do |item|
-      queues << item.strip
-    end
-
-    return rand_queues(queues)
-  end
-
-  def get_job_from_queues(queues, testbox)
-    job = nil
-
-    queues.each do |queue|
-      job = prepare_job("sched/#{queue}", testbox)
-      return job if job
-    end
-
-    return job
-  end
-
-  def get_job_boot(host, boot_type)
-    queues = get_queues(host)
-    job = get_job_from_queues(queues, host)
-
-    if job
-      Jobfile::Operate.create_job_cpio(job.dump_to_json_any, Kemal.config.public_folder)
-    end
-
-    return boot_content(job, boot_type)
-  end
-
-  # auto submit a job to collect the host information
-  # grub hostname is link with ":", like "00:01:02:03:04:05"
-  # remind: if like with "-", last "-05" is treated as host number
-  # then hostname will be "sut-00-01-02-03-04" !!!
-  def submit_host_info_job(mac)
-    host = "sut-#{mac}"
-    set_host_mac(mac, host)
-
-    Jobfile::Operate.auto_submit_job(
-      "#{ENV["LKP_SRC"]}/jobs/host-info.yaml",
-      "testbox: #{host}")
-  end
-
-  def find_job_boot(env : HTTP::Server::Context)
-    value = env.params.url["value"]
-    boot_type = env.params.url["boot_type"]
-
-    case boot_type
-    when "ipxe"
-      host = @redis.hash_get("sched/mac2host", normalize_mac(value))
-    when "grub"
-      host = @redis.hash_get("sched/mac2host", normalize_mac(value))
-      submit_host_info_job(value) unless host
-    when "container"
-      host = value
-    end
-
-    get_job_boot(host, boot_type)
-  end
-
-  def find_next_job_boot(env)
-    hostname = env.params.query["hostname"]?
-    mac = env.params.query["mac"]?
-    if !hostname && mac
-      hostname = @redis.hash_get("sched/mac2host", normalize_mac(mac))
-    end
-
-    get_job_boot(hostname, "ipxe")
-  end
-
-  def get_testbox_boot_content(testbox, boot_type)
-    job = find_job(testbox) if testbox
-    Jobfile::Operate.create_job_cpio(job.dump_to_json_any,
-                                     Kemal.config.public_folder) if job
-
-    return boot_content(job, boot_type)
-  end
-
-  private def find_job(testbox : String, count = 1)
-    tbox_group = JobHelper.match_tbox_group(testbox)
-    tbox = tbox_group.partition("--")[0]
-
-    queue_list = query_consumable_keys(tbox)
-
-    boxes = ["sched/" + testbox,
-             "sched/" + tbox_group,
-             "sched/" + tbox,
-             "sched/" + tbox_group + "/idle"]
-    boxes.each do |box|
-      next if queue_list.select(box).size == 0
-      count.times do
-        job = prepare_job(box, testbox)
-        return job if job
-
-        sleep(1) unless count == 1
-      end
-    end
-
-    # when find no job, auto submit idle job at background
-    spawn { auto_submit_idle_job(tbox_group) }
-
-    return nil
-  end
-
-  private def prepare_job(queue_name, testbox)
-    response = @task_queue.consume_task(queue_name)
-    job_id = JSON.parse(response[1].to_json)["id"] if response[0] == 200
-    job = nil
-
-    if job_id
-      begin
-        job = @es.get_job(job_id.to_s)
-      rescue ex
-        puts "Invalid job (id=#{job_id}) in es. Info: #{ex}"
-        puts ex.inspect_with_backtrace
-      end
-    end
-
-    if job
-      job.update({"testbox" => testbox})
-      job.set_result_root
-      puts %({"job_id": "#{job_id}", "result_root": "/srv#{job.result_root}", "job_state": "set result root"})
-      @redis.set_job(job)
-    end
-    return job
-  end
-
-  private def get_idle_job(tbox_group, testbox)
-    job = prepare_job("sched/#{tbox_group}/idle", testbox)
-
-    # if there has no idle job, auto submit and get 1
-    if job.nil?
-      auto_submit_idle_job(tbox_group)
-      job = prepare_job("sched/#{tbox_group}/idle", testbox)
-    end
-
-    return job
-  end
-
   def auto_submit_idle_job(tbox_group)
     full_path_patterns = "#{ENV["CCI_REPOS"]}/lab-#{ENV["lab"]}/allot/idle/#{tbox_group}/*.yaml"
     extra_job_fields = [
@@ -561,16 +340,6 @@
       extra_job_fields) if Dir.glob(full_path_patterns).size > 0
   end
 
-  private def get_boot_ipxe(job : Job)
-    response = "#!ipxe\n\n"
-    response += job.initrds_uri
-    response += job.kernel_uri
-    response += job.kernel_params
-    response += "\nboot\n"
-
-    return response
-  end
-
   def update_job_parameter(env : HTTP::Server::Context)
     job_id = env.params.query["job_id"]?
     if !job_id
diff --git a/src/scheduler/find_job_boot.cr b/src/scheduler/find_job_boot.cr
new file mode 100644
index 0000000..b5a23c5
--- /dev/null
+++ b/src/scheduler/find_job_boot.cr
@@ -0,0 +1,220 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  def find_job_boot(env : HTTP::Server::Context)
+    value = env.params.url["value"]
+    boot_type = env.params.url["boot_type"]
+
+    case boot_type
+    when "ipxe"
+      host = @redis.hash_get("sched/mac2host", normalize_mac(value))
+    when "grub"
+      host = @redis.hash_get("sched/mac2host", normalize_mac(value))
+      submit_host_info_job(value) unless host
+    when "container"
+      host = value
+    end
+
+    get_job_boot(host, boot_type)
+  end
+
+  # auto submit a job to collect the host information
+  # grub hostname is link with ":", like "00:01:02:03:04:05"
+  # remind: if like with "-", last "-05" is treated as host number
+  # then hostname will be "sut-00-01-02-03-04" !!!
+  def submit_host_info_job(mac)
+    host = "sut-#{mac}"
+    set_host_mac(mac, host)
+
+    Jobfile::Operate.auto_submit_job(
+      "#{ENV["LKP_SRC"]}/jobs/host-info.yaml",
+      "testbox: #{host}")
+  end
+
+  def rand_queues(queues)
+    return queues if queues.empty?
+
+    queues_size = queues.size
+    base = Random.rand(queues_size)
+    temp_queues = [] of String
+
+    (0..queues_size - 1).each do |index|
+      temp_queues << queues[(index + base) % queues_size]
+    end
+
+    return temp_queues
+  end
+
+  def get_queues(host)
+    queues = [] of String
+
+    queues_str = @redis.hash_get("sched/host2queues", host)
+    return queues unless queues_str
+
+    queues_str.split(',', remove_empty: true) do |item|
+      queues << item.strip
+    end
+
+    return rand_queues(queues)
+  end
+
+  def get_job_from_queues(queues, testbox)
+    job = nil
+
+    queues.each do |queue|
+      job = prepare_job("sched/#{queue}", testbox)
+      return job if job
+    end
+
+    return job
+  end
+
+  def get_job_boot(host, boot_type)
+    queues = get_queues(host)
+    job = get_job_from_queues(queues, host)
+
+    if job
+      Jobfile::Operate.create_job_cpio(job.dump_to_json_any, Kemal.config.public_folder)
+    end
+
+    return boot_content(job, boot_type)
+  end
+
+  private def ipxe_msg(msg)
+    "#!ipxe
+    echo ...
+    echo #{msg}
+    echo ...
+    reboot"
+  end
+
+  private def grub_msg(msg)
+    "#!grub
+    echo ...
+    echo #{msg}
+    echo ...
+    reboot"
+  end
+
+  private def get_boot_container(job : Job)
+    response = Hash(String, String).new
+    response["docker_image"] = "#{job.docker_image}"
+    response["lkp"] = "http://#{INITRD_HTTP_HOST}:#{INITRD_HTTP_PORT}" +
+                      JobHelper.service_path("#{SRV_INITRD}/lkp/#{job.lkp_initrd_user}/lkp-#{job.arch}.cgz")
+    response["job"] = "http://#{SCHED_HOST}:#{SCHED_PORT}/job_initrd_tmpfs/#{job.id}/job.cgz"
+
+    return response.to_json
+  end
+
+  private def get_boot_grub(job : Job)
+    initrd_lkp_cgz = "lkp-#{job.os_arch}.cgz"
+
+    response = "#!grub\n\n"
+    response += "linux (http,#{OS_HTTP_HOST}:#{OS_HTTP_PORT})"
+    response += "#{JobHelper.service_path("#{SRV_OS}/#{job.os_dir}/vmlinuz")} user=lkp"
+    response += " job=/lkp/scheduled/job.yaml RESULT_ROOT=/result/job"
+    response += " rootovl ip=dhcp ro root=#{job.kernel_append_root}\n"
+
+    response += "initrd (http,#{OS_HTTP_HOST}:#{OS_HTTP_PORT})"
+    response += JobHelper.service_path("#{SRV_OS}/#{job.os_dir}/initrd.lkp")
+    response += " (http,#{INITRD_HTTP_HOST}:#{INITRD_HTTP_PORT})"
+    response += JobHelper.service_path("#{SRV_INITRD}/lkp/#{job.lkp_initrd_user}/#{initrd_lkp_cgz}")
+    response += " (http,#{SCHED_HOST}:#{SCHED_PORT})/job_initrd_tmpfs/"
+    response += "#{job.id}/job.cgz\n"
+
+    response += "boot\n"
+
+    return response
+  end
+
+  def touch_access_key_file(job : Job)
+    FileUtils.touch(job.access_key_file)
+  end
+
+  private def get_boot_ipxe(job : Job)
+    response = "#!ipxe\n\n"
+    response += job.initrds_uri
+    response += job.kernel_uri
+    response += job.kernel_params
+    response += "\nboot\n"
+
+    return response
+  end
+
+  def boot_content(job : Job | Nil, boot_type : String)
+    touch_access_key_file(job) if job
+
+    case boot_type
+    when "ipxe"
+      return job ? get_boot_ipxe(job) : ipxe_msg("No job now")
+    when "grub"
+      return job ? get_boot_grub(job) : grub_msg("No job now")
+    when "container"
+      return job ? get_boot_container(job) : Hash(String, String).new.to_json
+    else
+      raise "Not defined boot type #{boot_type}"
+    end
+  end
+
+  private def find_job(testbox : String, count = 1)
+    tbox_group = JobHelper.match_tbox_group(testbox)
+    tbox = tbox_group.partition("--")[0]
+
+    queue_list = query_consumable_keys(tbox)
+
+    boxes = ["sched/" + testbox,
+             "sched/" + tbox_group,
+             "sched/" + tbox,
+             "sched/" + tbox_group + "/idle"]
+    boxes.each do |box|
+      next if queue_list.select(box).size == 0
+      count.times do
+        job = prepare_job(box, testbox)
+        return job if job
+
+        sleep(1) unless count == 1
+      end
+    end
+
+    # when find no job, auto submit idle job at background
+    spawn { auto_submit_idle_job(tbox_group) }
+
+    return nil
+  end
+
+  private def prepare_job(queue_name, testbox)
+    response = @task_queue.consume_task(queue_name)
+    job_id = JSON.parse(response[1].to_json)["id"] if response[0] == 200
+    job = nil
+
+    if job_id
+      begin
+        job = @es.get_job(job_id.to_s)
+      rescue ex
+        puts "Invalid job (id=#{job_id}) in es. Info: #{ex}"
+        puts ex.inspect_with_backtrace
+      end
+    end
+
+    if job
+      job.update({"testbox" => testbox})
+      job.set_result_root
+      puts %({"job_id": "#{job_id}", "result_root": "/srv#{job.result_root}", "job_state": "set result root"})
+      @redis.set_job(job)
+    end
+    return job
+  end
+
+  private def get_idle_job(tbox_group, testbox)
+    job = prepare_job("sched/#{tbox_group}/idle", testbox)
+
+    # if there has no idle job, auto submit and get 1
+    if job.nil?
+ auto_submit_idle_job(tbox_group) + job = prepare_job("sched/#{tbox_group}/idle", testbox) + end + + return job + end +end diff --git a/src/scheduler/find_next_job_boot.cr b/src/scheduler/find_next_job_boot.cr new file mode 100644 index 0000000..807a9ae --- /dev/null +++ b/src/scheduler/find_next_job_boot.cr @@ -0,0 +1,14 @@ +# SPDX-License-Identifier: MulanPSL-2.0+ +# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved. + +class Sched + def find_next_job_boot(env) + hostname = env.params.query["hostname"]? + mac = env.params.query["mac"]? + if !hostname && mac + hostname = @redis.hash_get("sched/mac2host", normalize_mac(mac)) + end + + get_job_boot(hostname, "ipxe") + end +end -- 2.23.0
[PATCH v2 compass-ci] sched: refactor sched class for the find job boot function
by Cao Xueliang 09 Nov '20

Refactor the sched class according to the scheduler.cr API.
Extract the find_job_boot function from sched.cr into find_job_boot.cr.
Extract the find_next_job_boot function from sched.cr into find_next_job_boot.cr.

Signed-off-by: Cao Xueliang <caoxl78320(a)163.com>
---
 src/lib/sched.cr                    | 237 +---------------------------
 src/scheduler/find_job_boot.cr      | 221 ++++++++++++++++++++++++++++
 src/scheduler/find_next_job_boot.cr |  14 ++
 3 files changed, 238 insertions(+), 234 deletions(-)
 create mode 100644 src/scheduler/find_job_boot.cr
 create mode 100644 src/scheduler/find_next_job_boot.cr

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index a4b12b4..6ecd95d 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -13,6 +13,9 @@ require "../scheduler/jobfile_operate"
 require "../scheduler/redis_client"
 require "../scheduler/elasticsearch_client"
 
+require "../scheduler/find_job_boot"
+require "../scheduler/find_next_job_boot"
+
 class Sched
   property es
   property redis
@@ -323,230 +326,6 @@ class Sched
     @es.set_job_content(job)
   end
 
-  private def ipxe_msg(msg)
-    "#!ipxe
-      echo ...
-      echo #{msg}
-      echo ...
-      reboot"
-  end
-
-  private def grub_msg(msg)
-    "#!grub
-      echo ...
-      echo #{msg}
-      echo ...
-      reboot"
-  end
-
-  private def get_boot_container(job : Job)
-    response = Hash(String, String).new
-    response["docker_image"] = "#{job.docker_image}"
-    response["lkp"] = "http://#{INITRD_HTTP_HOST}:#{INITRD_HTTP_PORT}" +
-                      JobHelper.service_path("#{SRV_INITRD}/lkp/#{job.lkp_initrd_user}/lkp-#{job.arch}.cgz")
-    response["job"] = "http://#{SCHED_HOST}:#{SCHED_PORT}/job_initrd_tmpfs/#{job.id}/job.cgz"
-
-    return response.to_json
-  end
-
-  private def get_boot_grub(job : Job)
-    initrd_lkp_cgz = "lkp-#{job.os_arch}.cgz"
-
-    response = "#!grub\n\n"
-    response += "linux (http,#{OS_HTTP_HOST}:#{OS_HTTP_PORT})"
-    response += "#{JobHelper.service_path("#{SRV_OS}/#{job.os_dir}/vmlinuz")} user=lkp"
-    response += " job=/lkp/scheduled/job.yaml RESULT_ROOT=/result/job"
-    response += " rootovl ip=dhcp ro root=#{job.kernel_append_root}\n"
-
-    response += "initrd (http,#{OS_HTTP_HOST}:#{OS_HTTP_PORT})"
-    response += JobHelper.service_path("#{SRV_OS}/#{job.os_dir}/initrd.lkp")
-    response += " (http,#{INITRD_HTTP_HOST}:#{INITRD_HTTP_PORT})"
-    response += JobHelper.service_path("#{SRV_INITRD}/lkp/#{job.lkp_initrd_user}/#{initrd_lkp_cgz}")
-    response += " (http,#{SCHED_HOST}:#{SCHED_PORT})/job_initrd_tmpfs/"
-    response += "#{job.id}/job.cgz\n"
-
-    response += "boot\n"
-
-    return response
-  end
-
-  def touch_access_key_file(job : Job)
-    FileUtils.touch(job.access_key_file)
-  end
-
-  def boot_content(job : Job | Nil, boot_type : String)
-    touch_access_key_file(job) if job
-
-    case boot_type
-    when "ipxe"
-      return job ? get_boot_ipxe(job) : ipxe_msg("No job now")
-    when "grub"
-      return job ? get_boot_grub(job) : grub_msg("No job now")
-    when "container"
-      return job ? get_boot_container(job) : Hash(String, String).new.to_json
-    else
-      raise "Not defined boot type #{boot_type}"
-    end
-  end
-
-  def rand_queues(queues)
-    return queues if queues.empty?
-
-    queues_size = queues.size
-    base = Random.rand(queues_size)
-    temp_queues = [] of String
-
-    (0..queues_size - 1).each do |index|
-      temp_queues << queues[(index + base) % queues_size]
-    end
-
-    return temp_queues
-  end
-
-  def get_queues(host)
-    queues = [] of String
-
-    queues_str = @redis.hash_get("sched/host2queues", host)
-    return queues unless queues_str
-
-    queues_str.split(',', remove_empty: true) do |item|
-      queues << item.strip
-    end
-
-    return rand_queues(queues)
-  end
-
-  def get_job_from_queues(queues, testbox)
-    job = nil
-
-    queues.each do |queue|
-      job = prepare_job("sched/#{queue}", testbox)
-      return job if job
-    end
-
-    return job
-  end
-
-  def get_job_boot(host, boot_type)
-    queues = get_queues(host)
-    job = get_job_from_queues(queues, host)
-
-    if job
-      Jobfile::Operate.create_job_cpio(job.dump_to_json_any, Kemal.config.public_folder)
-    end
-
-    return boot_content(job, boot_type)
-  end
-
-  # auto submit a job to collect the host information
-  # grub hostname is link with ":", like "00:01:02:03:04:05"
-  # remind: if like with "-", last "-05" is treated as host number
-  #         then hostname will be "sut-00-01-02-03-04" !!!
-  def submit_host_info_job(mac)
-    host = "sut-#{mac}"
-    set_host_mac(mac, host)
-
-    Jobfile::Operate.auto_submit_job(
-      "#{ENV["LKP_SRC"]}/jobs/host-info.yaml",
-      "testbox: #{host}")
-  end
-
-  def find_job_boot(env : HTTP::Server::Context)
-    value = env.params.url["value"]
-    boot_type = env.params.url["boot_type"]
-
-    case boot_type
-    when "ipxe"
-      host = @redis.hash_get("sched/mac2host", normalize_mac(value))
-    when "grub"
-      host = @redis.hash_get("sched/mac2host", normalize_mac(value))
-      submit_host_info_job(value) unless host
-    when "container"
-      host = value
-    end
-
-    get_job_boot(host, boot_type)
-  end
-
-  def find_next_job_boot(env)
-    hostname = env.params.query["hostname"]?
-    mac = env.params.query["mac"]?
-    if !hostname && mac
-      hostname = @redis.hash_get("sched/mac2host", normalize_mac(mac))
-    end
-
-    get_job_boot(hostname, "ipxe")
-  end
-
-  def get_testbox_boot_content(testbox, boot_type)
-    job = find_job(testbox) if testbox
-    Jobfile::Operate.create_job_cpio(job.dump_to_json_any,
-                                     Kemal.config.public_folder) if job
-
-    return boot_content(job, boot_type)
-  end
-
-  private def find_job(testbox : String, count = 1)
-    tbox_group = JobHelper.match_tbox_group(testbox)
-    tbox = tbox_group.partition("--")[0]
-
-    queue_list = query_consumable_keys(tbox)
-
-    boxes = ["sched/" + testbox,
-             "sched/" + tbox_group,
-             "sched/" + tbox,
-             "sched/" + tbox_group + "/idle"]
-    boxes.each do |box|
-      next if queue_list.select(box).size == 0
-      count.times do
-        job = prepare_job(box, testbox)
-        return job if job
-
-        sleep(1) unless count == 1
-      end
-    end
-
-    # when find no job, auto submit idle job at background
-    spawn { auto_submit_idle_job(tbox_group) }
-
-    return nil
-  end
-
-  private def prepare_job(queue_name, testbox)
-    response = @task_queue.consume_task(queue_name)
-    job_id = JSON.parse(response[1].to_json)["id"] if response[0] == 200
-    job = nil
-
-    if job_id
-      begin
-        job = @es.get_job(job_id.to_s)
-      rescue ex
-        puts "Invalid job (id=#{job_id}) in es. Info: #{ex}"
-        puts ex.inspect_with_backtrace
-      end
-    end
-
-    if job
-      job.update({"testbox" => testbox})
-      job.set_result_root
-      puts %({"job_id": "#{job_id}", "result_root": "/srv#{job.result_root}", "job_state": "set result root"})
-      @redis.set_job(job)
-    end
-    return job
-  end
-
-  private def get_idle_job(tbox_group, testbox)
-    job = prepare_job("sched/#{tbox_group}/idle", testbox)
-
-    # if there has no idle job, auto submit and get 1
-    if job.nil?
-      auto_submit_idle_job(tbox_group)
-      job = prepare_job("sched/#{tbox_group}/idle", testbox)
-    end
-
-    return job
-  end
-
   def auto_submit_idle_job(tbox_group)
     full_path_patterns = "#{ENV["CCI_REPOS"]}/lab-#{ENV["lab"]}/allot/idle/#{tbox_group}/*.yaml"
     extra_job_fields = [
@@ -561,16 +340,6 @@ class Sched
       extra_job_fields) if Dir.glob(full_path_patterns).size > 0
   end
 
-  private def get_boot_ipxe(job : Job)
-    response = "#!ipxe\n\n"
-    response += job.initrds_uri
-    response += job.kernel_uri
-    response += job.kernel_params
-    response += "\nboot\n"
-
-    return response
-  end
-
   def update_job_parameter(env : HTTP::Server::Context)
     job_id = env.params.query["job_id"]?
     if !job_id
diff --git a/src/scheduler/find_job_boot.cr b/src/scheduler/find_job_boot.cr
new file mode 100644
index 0000000..9f08288
--- /dev/null
+++ b/src/scheduler/find_job_boot.cr
@@ -0,0 +1,221 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  def find_job_boot(env : HTTP::Server::Context)
+    value = env.params.url["value"]
+    boot_type = env.params.url["boot_type"]
+
+    case boot_type
+    when "ipxe"
+      host = @redis.hash_get("sched/mac2host", normalize_mac(value))
+    when "grub"
+      host = @redis.hash_get("sched/mac2host", normalize_mac(value))
+      submit_host_info_job(value) unless host
+    when "container"
+      host = value
+    end
+
+    get_job_boot(host, boot_type)
+  end
+
+  # auto submit a job to collect the host information
+  # grub hostname is link with ":", like "00:01:02:03:04:05"
+  # remind: if like with "-", last "-05" is treated as host number
+  #         then hostname will be "sut-00-01-02-03-04" !!!
+  def submit_host_info_job(mac)
+    host = "sut-#{mac}"
+    set_host_mac(mac, host)
+
+    Jobfile::Operate.auto_submit_job(
+      "#{ENV["LKP_SRC"]}/jobs/host-info.yaml",
+      "testbox: #{host}")
+  end
+
+  def rand_queues(queues)
+    return queues if queues.empty?
+
+    queues_size = queues.size
+    base = Random.rand(queues_size)
+    temp_queues = [] of String
+
+    (0..queues_size - 1).each do |index|
+      temp_queues << queues[(index + base) % queues_size]
+    end
+
+    return temp_queues
+  end
+
+  def get_queues(host)
+    queues = [] of String
+
+    queues_str = @redis.hash_get("sched/host2queues", host)
+    return queues unless queues_str
+
+    queues_str.split(',', remove_empty: true) do |item|
+      queues << item.strip
+    end
+
+    return rand_queues(queues)
+  end
+
+  def get_job_from_queues(queues, testbox)
+    job = nil
+
+    queues.each do |queue|
+      job = prepare_job("sched/#{queue}", testbox)
+      return job if job
+    end
+
+    return job
+  end
+
+  def get_job_boot(host, boot_type)
+    queues = get_queues(host)
+    job = get_job_from_queues(queues, host)
+
+    if job
+      Jobfile::Operate.create_job_cpio(job.dump_to_json_any, Kemal.config.public_folder)
+    end
+
+    return boot_content(job, boot_type)
+  end
+
+  private def ipxe_msg(msg)
+    "#!ipxe
+      echo ...
+      echo #{msg}
+      echo ...
+      reboot"
+  end
+
+  private def grub_msg(msg)
+    "#!grub
+      echo ...
+      echo #{msg}
+      echo ...
+      reboot"
+  end
+
+  private def get_boot_container(job : Job)
+    response = Hash(String, String).new
+    response["docker_image"] = "#{job.docker_image}"
+    response["lkp"] = "http://#{INITRD_HTTP_HOST}:#{INITRD_HTTP_PORT}" +
+                      JobHelper.service_path("#{SRV_INITRD}/lkp/#{job.lkp_initrd_user}/lkp-#{job.arch}.cgz")
+    response["job"] = "http://#{SCHED_HOST}:#{SCHED_PORT}/job_initrd_tmpfs/#{job.id}/job.cgz"
+
+    return response.to_json
+  end
+
+  private def get_boot_grub(job : Job)
+    initrd_lkp_cgz = "lkp-#{job.os_arch}.cgz"
+
+    response = "#!grub\n\n"
+    response += "linux (http,#{OS_HTTP_HOST}:#{OS_HTTP_PORT})"
+    response += "#{JobHelper.service_path("#{SRV_OS}/#{job.os_dir}/vmlinuz")} user=lkp"
+    response += " job=/lkp/scheduled/job.yaml RESULT_ROOT=/result/job"
+    response += " rootovl ip=dhcp ro root=#{job.kernel_append_root}\n"
+
+    response += "initrd (http,#{OS_HTTP_HOST}:#{OS_HTTP_PORT})"
+    response += JobHelper.service_path("#{SRV_OS}/#{job.os_dir}/initrd.lkp")
+    response += " (http,#{INITRD_HTTP_HOST}:#{INITRD_HTTP_PORT})"
+    response += JobHelper.service_path("#{SRV_INITRD}/lkp/#{job.lkp_initrd_user}/#{initrd_lkp_cgz}")
+    response += " (http,#{SCHED_HOST}:#{SCHED_PORT})/job_initrd_tmpfs/"
+    response += "#{job.id}/job.cgz\n"
+
+    response += "boot\n"
+
+    return response
+  end
+
+  def touch_access_key_file(job : Job)
+    FileUtils.touch(job.access_key_file)
+  end
+
+  private def get_boot_ipxe(job : Job)
+    response = "#!ipxe\n\n"
+    response += job.initrds_uri
+    response += job.kernel_uri
+    response += job.kernel_params
+    response += "\nboot\n"
+
+    return response
+  end
+
+  def boot_content(job : Job | Nil, boot_type : String)
+    touch_access_key_file(job) if job
+
+    case boot_type
+    when "ipxe"
+      return job ? get_boot_ipxe(job) : ipxe_msg("No job now")
+    when "grub"
+      return job ? get_boot_grub(job) : grub_msg("No job now")
+    when "container"
+      return job ? get_boot_container(job) : Hash(String, String).new.to_json
+    else
+      raise "Not defined boot type #{boot_type}"
+    end
+  end
+
+  private def find_job(testbox : String, count = 1)
+    tbox_group = JobHelper.match_tbox_group(testbox)
+    tbox = tbox_group.partition("--")[0]
+
+    queue_list = query_consumable_keys(tbox)
+
+    boxes = ["sched/" + testbox,
+             "sched/" + tbox_group,
+             "sched/" + tbox,
+             "sched/" + tbox_group + "/idle"]
+    boxes.each do |box|
+      next if queue_list.select(box).size == 0
+      count.times do
+        job = prepare_job(box, testbox)
+        return job if job
+
+        sleep(1) unless count == 1
+      end
+    end
+
+    # when find no job, auto submit idle job at background
+    spawn { auto_submit_idle_job(tbox_group) }
+
+    return nil
+  end
+
+  private def prepare_job(queue_name, testbox)
+    response = @task_queue.consume_task(queue_name)
+    job_id = JSON.parse(response[1].to_json)["id"] if response[0] == 200
+    job = nil
+
+    if job_id
+      begin
+        job = @es.get_job(job_id.to_s)
+      rescue ex
+        puts "Invalid job (id=#{job_id}) in es. Info: #{ex}"
+        puts ex.inspect_with_backtrace
+      end
+    end
+
+    if job
+      job.update({"testbox" => testbox})
+      job.set_result_root
+      puts %({"job_id": "#{job_id}", "result_root": "/srv#{job.result_root}", "job_state": "set result root"})
+      @redis.set_job(job)
+    end
+    return job
+  end
+
+  private def get_idle_job(tbox_group, testbox)
+    job = prepare_job("sched/#{tbox_group}/idle", testbox)
+
+    # if there has no idle job, auto submit and get 1
+    if job.nil?
+      auto_submit_idle_job(tbox_group)
+      job = prepare_job("sched/#{tbox_group}/idle", testbox)
+    end
+
+    return job
+  end
+
+end
diff --git a/src/scheduler/find_next_job_boot.cr b/src/scheduler/find_next_job_boot.cr
new file mode 100644
index 0000000..807a9ae
--- /dev/null
+++ b/src/scheduler/find_next_job_boot.cr
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+class Sched
+  def find_next_job_boot(env)
+    hostname = env.params.query["hostname"]?
+    mac = env.params.query["mac"]?
+    if !hostname && mac
+      hostname = @redis.hash_get("sched/mac2host", normalize_mac(mac))
+    end
+
+    get_job_boot(hostname, "ipxe")
+  end
+end
--
2.23.0
[PATCH v2 lkp-tests] [multi-qemu] support queues parameter
by Xiao Shenwei 09 Nov '20

define a queues field for multi-qemu, for example:

	multi-qemu-0:
	  nr_vm: 20
	  tbox_group: vm-2p8g
	  queues: vm-2p8g.taishan200-2280-2s48p-256g--a1

Signed-off-by: Xiao Shenwei <xiaoshenwei96(a)163.com>
---
 daemon/multi-qemu | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/daemon/multi-qemu b/daemon/multi-qemu
index b407251f..8d9dd38d 100755
--- a/daemon/multi-qemu
+++ b/daemon/multi-qemu
@@ -1,6 +1,7 @@
 #!/bin/sh
 # - tbox_group
 # - nr_vm
+# - queues
 
 # SPDX-License-Identifier: MulanPSL-2.0+
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
@@ -8,10 +9,10 @@
 multi_qemu()
 {
 	export CCI_SRC=/c/compass-ci
-	local hostname=$tbox_group--$HOSTNAME
+	local hostname=$tbox_group.$HOSTNAME
 
 	cd "$CACHE_DIR"
-	$CCI_SRC/providers/multi-qemu "$hostname" "$nr_vm"
+	$CCI_SRC/providers/multi-qemu -n "$hostname" -c "$nr_vm" -q "$queues"
 }
 
 multi_qemu
--
2.23.0
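
To see how the three config fields flow into the provider invocation, here
is a Ruby sketch that reads the example config from the commit message and
builds the new command line (the config literal and the host0 fallback are
illustrative; the command shape follows the patch):

    require 'yaml'
    require 'shellwords'

    # Take the multi-qemu-0 entry from the example daemon config.
    cfg = YAML.safe_load(<<~YAML)['multi-qemu-0']
      multi-qemu-0:
        nr_vm: 20
        tbox_group: vm-2p8g
        queues: vm-2p8g.taishan200-2280-2s48p-256g--a1
    YAML

    # hostname is now joined with "." instead of "--"
    hostname = "#{cfg['tbox_group']}.#{ENV.fetch('HOSTNAME', 'host0')}"
    cmd = ['providers/multi-qemu', '-n', hostname, '-c', cfg['nr_vm'].to_s, '-q', cfg['queues']]
    puts cmd.shelljoin
    # e.g. providers/multi-qemu -n vm-2p8g.host0 -c 20 -q vm-2p8g.taishan200-2280-2s48p-256g--a1
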
[PATCH v2 compass-ci 1/2] src/lib/web_backend.rb: "/compare_candidates" tbox_group regex error
by Zhang Yuhang 09 Nov '20

[error info]
1. the bare /\d+$/ alternative matches any name ending in digits, such as "xxx123".
2. for such a name, `index = "xxx123".index('--') || "xxx123".rindex('-')`
   leaves index equal to nil.
3. `r = r[0, nil]` then raises a TypeError.

Signed-off-by: Zhang Yuhang <zhangyuhang25(a)huawei.com>
---
 src/lib/web_backend.rb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/lib/web_backend.rb b/src/lib/web_backend.rb
index c3f6bef..55c5e4a 100644
--- a/src/lib/web_backend.rb
+++ b/src/lib/web_backend.rb
@@ -47,7 +47,7 @@ end
 def filter_tbox_group(es_result)
   result = Set.new
   es_result.each do |r|
-    if r =~ /(^.+--.+$)|(^vm-.*-\d\w*-([a-zA-Z]+)|(\d+)$)/
+    if r =~ /(^.+--.+$)|(^vm-.*-\d\w*-(([a-zA-Z]+)|(\d+))$)/
       index = r.index('--') || r.rindex('-')
       r = r[0, index]
     end
--
2.23.0
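
The root cause is regex alternation precedence: in the old pattern the
trailing $ binds only to the last alternative (\d+), so a bare run of
digits at the end of any string matches on its own. The extra parentheses
in the fix pull both alternatives inside the vm- pattern. A quick Ruby
check of the two patterns (test strings chosen from this thread and the
lab-z9 patch below):

    old = /(^.+--.+$)|(^vm-.*-\d\w*-([a-zA-Z]+)|(\d+)$)/
    new = /(^.+--.+$)|(^vm-.*-\d\w*-(([a-zA-Z]+)|(\d+))$)/

    p("xxx123" =~ old)  # => 3   the bare (\d+)$ branch matches, leading to r[0, nil]
    p("xxx123" =~ new)  # => nil the digits branch is now anchored inside the vm- shape
    p("taishan200-2280-2s48p-256g--a101" =~ new)  # => 0  "--" names still match
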
[PATCH v2 lab-z9] cluster: add cluster config file
by Zhang Yu 09 Nov '20

About the config filename "cs-s1-a102-c1":
- cs   : cluster
- s1   : one server node
- a102 : server node name
- c1   : one client node

Signed-off-by: Zhang Yu <2134782174(a)qq.com>
---
 cluster/cs-s1-a102-c1 | 10 ++++++++++
 1 file changed, 10 insertions(+)
 create mode 100644 cluster/cs-s1-a102-c1

diff --git a/cluster/cs-s1-a102-c1 b/cluster/cs-s1-a102-c1
new file mode 100644
index 0000000..af57123
--- /dev/null
+++ b/cluster/cs-s1-a102-c1
@@ -0,0 +1,10 @@
+switch: Switch-P10
+ip0: 1
+nodes:
+  taishan200-2280-2s48p-512g--a102:
+    roles: [ server ]
+    macs: [ "44:67:47:c9:ea:37" ]
+
+  taishan200-2280-2s48p-256g--a101:
+    roles: [ client ]
+    macs: [ "44:67:47:d7:6c:a3" ]
--
2.23.0
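
Since a cluster file like this is plain YAML, role assignments can be read
generically. A small Ruby sketch (assumes it is run from a lab-z9 checkout
where the file added by this patch exists; the script itself is not a
compass-ci API):

    require 'yaml'

    # Parse the cluster config and print each node's roles and MACs.
    cluster = YAML.load_file('cluster/cs-s1-a102-c1')
    cluster['nodes'].each do |hostname, node|
      puts "#{hostname}: roles=#{node['roles'].join(',')} macs=#{node['macs'].join(',')}"
    end
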
[PATCH v3 compass-ci] sched: simplify cluster state updating
by Ren Wen 09 Nov '20

There are three steps when updating cluster state:
1) get cluster state from redis.
2) update cluster state.
3) rewrite to redis.

Before: each call wrote a single piece of job info back to redis.
After: one call writes several pieces of job info at once.
This saves redis round-trips when more than one field is updated.

Signed-off-by: Ren Wen <15991987063(a)163.com>
---
 src/lib/sched.cr | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/src/lib/sched.cr b/src/lib/sched.cr
index a4b12b4..bff6090 100644
--- a/src/lib/sched.cr
+++ b/src/lib/sched.cr
@@ -58,11 +58,11 @@ class Sched
     return cluster_state
   end
 
-  # get -> modify -> set
-  def update_cluster_state(cluster_id, job_id, property, value)
+  # Update job info according to cluster id.
+  def update_cluster_state(cluster_id, job_id, job_info : Hash(String, String))
     cluster_state = get_cluster_state(cluster_id)
     if cluster_state[job_id]?
-      cluster_state[job_id].merge!({property => value})
+      cluster_state[job_id].merge!(job_info)
       @redis.hash_set("sched/cluster_state", cluster_id, cluster_state.to_json)
     end
   end
@@ -86,9 +86,9 @@ class Sched
     case request_state
     when "abort", "finished", "failed"
       # update node state only
-      update_cluster_state(cluster_id, job_id, "state", states[request_state])
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
     when "wait_ready"
-      update_cluster_state(cluster_id, job_id, "state", states[request_state])
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
       @block_helper.block_until_finished(cluster_id) {
         cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
         cluster_state == "ready" || cluster_state == "abort"
@@ -96,7 +96,7 @@
 
       return cluster_state
     when "wait_finish"
-      update_cluster_state(cluster_id, job_id, "state", states[request_state])
+      update_cluster_state(cluster_id, job_id, {"state" => states[request_state]})
       while 1
         sleep(10)
         cluster_state = sync_cluster_state(cluster_id, job_id, states[request_state])
@@ -110,10 +110,11 @@
       direct_ips = env.params.query["direct_ips"]
       direct_macs = env.params.query["direct_macs"]
 
-      update_cluster_state(cluster_id, job_id, "roles", node_roles)
-      update_cluster_state(cluster_id, job_id, "ip", node_ip)
-      update_cluster_state(cluster_id, job_id, "direct_ips", direct_ips)
-      update_cluster_state(cluster_id, job_id, "direct_macs", direct_macs)
+      job_info = {"roles" => node_roles,
+                  "ip" => node_ip,
+                  "direct_ips" => direct_ips,
+                  "direct_macs" => direct_macs}
+      update_cluster_state(cluster_id, job_id, job_info)
     when "roles_ip"
       role = "server"
       role_state = get_role_state(cluster_id, role)
--
2.23.0
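
The pattern here is a read-modify-write on one redis hash field: fetch the
cluster JSON, merge all changed properties into the job's entry in memory,
then write the field back once. A Ruby sketch of the same flow (assumes a
redis-rb client instance named redis; key names follow the patch):

    require 'json'

    # One read and one write, regardless of how many properties change.
    def update_cluster_state(redis, cluster_id, job_id, job_info)
      cluster_state = JSON.parse(redis.hget('sched/cluster_state', cluster_id) || '{}')
      return unless cluster_state.key?(job_id)

      cluster_state[job_id].merge!(job_info)
      redis.hset('sched/cluster_state', cluster_id, cluster_state.to_json)
    end

    # update_cluster_state(redis, 'cluster-1', 'job-42',
    #                      'roles' => 'server', 'ip' => '172.17.0.2')

Note the trade-off: batching reduces round-trips, but the read-modify-write
is still not atomic, so concurrent updaters can race just as before.
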
[PATCH v7 lkp-tests 1/2] jobs/iozone-bs.yaml: combine iozone's multiple -i parameter to single
by Lu Kaiyi 07 Nov '20

[why]
avoid an explosion of parameters in iozone-bs.yaml

Signed-off-by: Lu Kaiyi <2392863668(a)qq.com>
---
 jobs/iozone-bs.yaml | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/jobs/iozone-bs.yaml b/jobs/iozone-bs.yaml
index e2cd9f48..53f1ac46 100644
--- a/jobs/iozone-bs.yaml
+++ b/jobs/iozone-bs.yaml
@@ -2,9 +2,7 @@ suite: iozone
 category: benchmark
 
 file_size: 4g
-write_rewrite: true
-read_reread: true
-random_read_write: true
+test: write, read, rand_rw

 block_size:
 - 64k
--
2.23.0
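
The change folds one boolean field per iozone mode into a single
comma-separated test list. A Ruby sketch of the equivalence (the legacy
and rename hashes below are illustrative, mirroring only the three fields
this patch removes):

    # Collapse the old per-mode booleans into the single "test" value.
    legacy = { 'write_rewrite' => true, 'read_reread' => true, 'random_read_write' => true }
    rename = { 'write_rewrite' => 'write', 'read_reread' => 'read',
               'random_read_write' => 'rand_rw' }

    test = legacy.select { |_, v| v }.keys.map { |k| rename[k] }.join(', ')
    puts "test: #{test}"  # => test: write, read, rand_rw
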
[PATCH v7 lkp-tests 2/2] tests/iozone: change the way of parsing parameter
by Lu Kaiyi 07 Nov '20

[why]
iozone-bs.yaml has combined multiple parameters into a single "test"
parameter, so the iozone test script also needs to change the way it
parses its parameters.

[how]
change the way of parsing the parameter.

small example for parsing the parameter:

	test=" write, read, rand_rw"

	OLD_IFS="$IFS"
	IFS=","
	array=($test)
	IFS="$OLD_IFS"

	for ele in ${array[@]}
	do
		case $ele in
		"write")
			echo $ele
			;;
		"read")
			echo $ele
			;;
		"rand_rw")
			echo $ele
			;;
		esac
	done

output:

	~/workspace/shell_learning% ./test.sh
	write
	read
	rand_rw

if "${array[@]}" (quoted) is used in the for loop there is no output: the
elements keep their leading spaces, so none of the case patterns match.

Signed-off-by: Lu Kaiyi <2392863668(a)qq.com>
---
 tests/iozone | 66 ++++++++++++++++++++++++++++++----------------------
 1 file changed, 38 insertions(+), 28 deletions(-)

diff --git a/tests/iozone b/tests/iozone
index 88a92a18..b6a1e206 100755
--- a/tests/iozone
+++ b/tests/iozone
@@ -1,42 +1,52 @@
 #!/bin/sh
 # - block_size
 # - file_size
-# - write_rewrite
-# - read_reread
-# - random_read_write
-# - read_backwards
-# - rewrite_record
-# - stride_read
-# - fwrite_refwrite
-# - fread_refread
-# - random_mix
-# - pwrite_repwrite
-# - pread_repread
-# - pwritev_repwritev
-# - preadv_repreadv
+# - test
 
 ## IOzone is a filesystem benchmark tool. The benchmark generates
 ## and measures a variety of file operations.
 
 . $LKP_SRC/lib/reproduce-log.sh
-
 args="iozone"
 if [ -n "$block_size" ]; then
 	args+=" -r $block_size"
-	[ -n "$file_size" ] && args+=" -s $file_size"
-	[ -n "$write_rewrite" ] && args+=" -i 0"
-	[ -n "$read_reread" ] && args+=" -i 1"
-	[ -n "$random_read_write" ] && args+=" -i 2"
-	[ -n "$read_backwards" ] && args+=" -i 3"
-	[ -n "$rewrite_record" ] && args+=" -i 4"
-	[ -n "$stride_read" ] && args+=" -i 5"
-	[ -n "$fwrite_refwrite" ] && args+=" -i 6"
-	[ -n "$fread_refread" ] && args+=" -i 7"
-	[ -n "$random_mix" ] && args+=" -i 8"
-	[ -n "$pwrite_repwrite" ] && args+=" -i 9"
-	[ -n "$pread_repread" ] && args+=" -i 10"
-	[ -n "$pwritev_repwritev" ] && args+=" -i 11"
-	[ -n "$preadv_repreadv" ] && args+=" -i 12"
+	[ -n "$file_size" ] && args+=" -s $file_size"
+	OLD_IFS="$IFS"
+	# reset IFS to split $test by "," and then restore default
+	IFS=","
+	array=($test)
+	IFS="$OLD_IFS"
+	for ele in ${array[@]}
+	do
+		case $ele in
+		"write") args+=" -i 0"
+			;;
+		"read") args+=" -i 1"
+			;;
+		"rand_rw") args+=" -i 2"
+			;;
+		"backwards") args+=" -i 3"
+			;;
+		"record") args+=" -i 4"
+			;;
+		"stride") args+=" -i 5"
+			;;
+		"fwrite") args+=" -i 6"
+			;;
+		"fread") args+=" -i 7"
+			;;
+		"rand_mix") args+=" -i 8"
+			;;
+		"pwrite") args+=" -i 9"
+			;;
+		"pread") args+=" -i 10"
+			;;
+		"pwritev") args+=" -i 11"
+			;;
+		"preadv") args+=" -i 12"
+			;;
+		esac
+	done
 else
 	args+=" -a"
 fi
--
2.23.0
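
The net effect is a lookup from mode names to iozone's -i selectors. The
same parsing is easier to see re-expressed in Ruby (a sketch; the real
script stays POSIX shell, and MODES here just restates the case table from
the patch):

    # Map mode names from the job's "test" field to iozone -i selectors.
    MODES = {
      'write' => 0, 'read' => 1, 'rand_rw' => 2, 'backwards' => 3,
      'record' => 4, 'stride' => 5, 'fwrite' => 6, 'fread' => 7,
      'rand_mix' => 8, 'pwrite' => 9, 'pread' => 10, 'pwritev' => 11,
      'preadv' => 12,
    }.freeze

    test = ' write, read, rand_rw'
    # .strip replaces the IFS dance: it drops the leading spaces that would
    # otherwise make the case patterns miss. fetch raises on unknown modes.
    args = test.split(',').map(&:strip).map { |m| "-i #{MODES.fetch(m)}" }
    puts(['iozone', '-r 64k', '-s 4g', *args].join(' '))
    # => iozone -r 64k -s 4g -i 0 -i 1 -i 2
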
[PATCH v2 compass-ci 07/10] container/mail-robot: Dockerfile
by Luan Shengde 07 Nov '20

Signed-off-by: Luan Shengde <shdluan(a)163.com>
---
 container/mail-robot/Dockerfile | 12 ++++++++++++
 1 file changed, 12 insertions(+)
 create mode 100644 container/mail-robot/Dockerfile

diff --git a/container/mail-robot/Dockerfile b/container/mail-robot/Dockerfile
new file mode 100644
index 0000000..4b660c1
--- /dev/null
+++ b/container/mail-robot/Dockerfile
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+
+FROM debian
+
+MAINTAINER shdluan(a)163.com
+
+ENV DEBIAN_FRONTEND noninteractive
+
+RUN apt-get update && \
+    apt-get install -y git uuid-runtime curl ruby-listen ruby-json ruby-mail && \
+    gem install fileutils elasticsearch activesupport
--
2.23.0
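
The image mixes apt-packaged Ruby libraries (ruby-listen, ruby-json,
ruby-mail) with gem-installed ones, so a quick in-container smoke test of
the load paths can catch mismatches early. A hedged Ruby sketch (the
script itself is illustrative, not part of mail-robot; the library list
mirrors the Dockerfile):

    # Run inside the built container, e.g. with: ruby smoke_test.rb
    %w[listen json mail fileutils elasticsearch active_support].each do |lib|
      require lib            # raises LoadError if a dependency is missing
      puts "ok: #{lib}"
    end
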