mailweb.openeuler.org
Compass-ci

compass-ci@openeuler.org

  • 2 participants
  • 5237 discussions
[PATCH lkp-tests] bin/lkp-setup-rootfs: fix mount error
by Li Ping 10 Nov '20

[problem]:

  submit -a cci-makepkg.yaml testbox=taishan200-2280-2s64p-256g--a42

  cat /srv/result/cci-makepkg/taishan200-2280-2s64p-256g--a42/2020-11-05/z9.146070/output
  mount error(16): Device or resource busy
  Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
  Failed to run mount
  /lkp/lkp/src/lib/debug.sh:12: die
  /lkp/lkp/src/tests/cci-makepkg:37: main

[why]:

Submitting with the option '-a' calls the define_files function and adds the
test user's changed files to the job's define_files, which then triggers
define_files_auto_pack. With submit -a cci-makepkg.yaml, $LKP_SRC/tests/cci-makepkg
runs twice, which mounts twice and causes the mount error.

Signed-off-by: Li Ping <15396232681(a)163.com>
---
 bin/lkp-setup-rootfs | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/bin/lkp-setup-rootfs b/bin/lkp-setup-rootfs
index 680027e6..83029f09 100755
--- a/bin/lkp-setup-rootfs
+++ b/bin/lkp-setup-rootfs
@@ -251,7 +251,9 @@ chmod_root_ssh
 while true; do
 	set_tbox_wtmp 'running'
 	. $job_script
 	define_files
-	define_files_auto_pack $define_files
+	[ "$suite" != "cci-makepkg" ] && [ "$suite" != "cci-depends" ] && {
+		define_files_auto_pack $define_files
+	}
 	if [ "$os_mount" != "initramfs" ]; then
 		wget_install_cgz $initrd_deps
 		wget_install_cgz $initrd_pkg
-- 
2.23.0
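The suite guard in the hunk above can be sketched in isolation. This is a minimal sketch using the two suite names from the patch; skip_auto_pack is an illustrative helper name, not part of lkp-tests:

```shell
#!/bin/sh
# Return success (0) when the suite manages its own packing and
# define_files_auto_pack must be skipped (helper name is illustrative).
skip_auto_pack() {
	case "$1" in
	cci-makepkg|cci-depends) return 0 ;;
	*) return 1 ;;
	esac
}

for suite in cci-makepkg iperf; do
	if skip_auto_pack "$suite"; then
		echo "$suite: skip auto pack"
	else
		echo "$suite: auto pack"
	fi
done
```

A case statement keeps the special-cased suite list in one place, unlike the chained `[ ... ] && [ ... ]` test in the patch.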
[PATCH lkp-tests] tests/*: fix double mount problem
by Wang Yong 10 Nov '20

[Why]
When using submit -a to run a cci-makepkg/cci-depends job, lkp calls
cci-makepkg/cci-depends first, so the destination dir gets mounted twice
when the job runs the second time.

[How]
Check the mountpoint before mounting the destination dir.

Signed-off-by: Wang Yong <wangyong0117(a)qq.com>
---
 tests/cci-depends | 2 ++
 tests/cci-makepkg | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/tests/cci-depends b/tests/cci-depends
index efa9a025b..4b885dc0c 100755
--- a/tests/cci-depends
+++ b/tests/cci-depends
@@ -7,6 +7,7 @@
 . $LKP_SRC/lib/debug.sh
 . $LKP_SRC/lib/misc-base.sh
+. $LKP_SRC/lib/mount.sh
 
 [ -n "$benchmark" ] || die "benchmark is empty"
 [ -n "$os_mount" ] || die "os_mount is empty"
@@ -28,6 +29,7 @@ pack_arch=$os_arch
 
 . $LKP_SRC/distro/$DISTRO
 
+is_mount_point ${DEPS_MNT} && umount ${DEPS_MNT}
 mount -t cifs -o guest,vers=1.0,noacl,nouser_xattr //$LKP_SERVER$DEPS_MNT $DEPS_MNT || die "Failed to run mount"
 
 umask 002
diff --git a/tests/cci-makepkg b/tests/cci-makepkg
index 9551ebfe2..a105f1895 100755
--- a/tests/cci-makepkg
+++ b/tests/cci-makepkg
@@ -13,6 +13,7 @@
 . $LKP_SRC/lib/debug.sh
 . $LKP_SRC/lib/misc-base.sh
 . $LKP_SRC/lib/env.sh
+. $LKP_SRC/lib/mount.sh
 
 [ -n "$benchmark" ] || die "benchmark is empty"
 [ -n "$os_mount" ] || die "os_mount is empty"
@@ -34,6 +35,7 @@ pack_to=${os_mount}/${os}/${os_arch}/${os_version}/${benchmark}
 cd $LKP_SRC/pkg/$benchmark || die "pkg is empty"
 
 [ -n "$LKP_SERVER" ] && {
+	is_mount_point ${PKG_MNT} && umount ${PKG_MNT}
 	mount -t cifs -o guest,vers=1.0,noacl,nouser_xattr //$LKP_SERVER$PKG_MNT $PKG_MNT || die "Failed to run mount"
 }
-- 
2.23.0
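is_mount_point comes from $LKP_SRC/lib/mount.sh; a minimal stand-in with the same contract (an assumption for illustration, not the library's actual code) can be written against /proc/mounts:

```shell
#!/bin/sh
# Minimal stand-in for lkp's is_mount_point (assumed semantics: succeed
# iff the directory is currently a mount point). Field 2 of /proc/mounts
# is the mount point.
is_mount_point() {
	awk -v dir="$1" '$2 == dir { found = 1 } END { exit !found }' /proc/mounts
}

# idempotent-mount pattern from the patch: unmount first if already mounted
DEPS_MNT=/srv/deps    # illustrative path
if is_mount_point "$DEPS_MNT"; then
	umount "$DEPS_MNT"
fi
# mount -t cifs ... "$DEPS_MNT" would follow here
```

Unmounting a stale mount before remounting makes the test script safe to run a second time on the same rootfs.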
[PATCH compass-ci 2/3] [multi-docker] support queues parameter
by Xiao Shenwei 09 Nov '20

Register host2queues and request the specified queues.

Signed-off-by: Xiao Shenwei <xiaoshenwei96(a)163.com>
---
 providers/docker/docker.rb | 47 ++++++++++++++++++++++++--------------
 1 file changed, 30 insertions(+), 17 deletions(-)

diff --git a/providers/docker/docker.rb b/providers/docker/docker.rb
index 4e6bc2f..8e489c6 100755
--- a/providers/docker/docker.rb
+++ b/providers/docker/docker.rb
@@ -11,15 +11,26 @@ require_relative '../../container/defconfig'
 
 BASE_DIR = '/srv/dc'
 
+names = Set.new %w[
+  SCHED_HOST
+  SCHED_PORT
+]
+defaults = relevant_defaults(names)
+SCHED_HOST = defaults['SCHED_HOST'] || '172.17.0.1'
+SCHED_PORT = defaults['SCHED_PORT'] || 3000
+
 def get_url(hostname)
-  names = Set.new %w[
-    SCHED_HOST
-    SCHED_PORT
-  ]
-  defaults = relevant_defaults(names)
-  host = defaults['SCHED_HOST'] || '172.17.0.1'
-  port = defaults['SCHED_PORT'] || 3000
-  "http://#{host}:#{port}/boot.container/hostname/#{hostname}"
+  "http://#{SCHED_HOST}:#{SCHED_PORT}/boot.container/hostname/#{hostname}"
+end
+
+def set_host2queues(hostname, queues)
+  cmd = "curl -X PUT 'http://#{SCHED_HOST}:#{SCHED_PORT}/set_host2queues?host=#{hostname}&queues=#{queues}'"
+  system cmd
+end
+
+def del_host2queues(hostname)
+  cmd = "curl -X PUT 'http://#{SCHED_HOST}:#{SCHED_PORT}/del_host2queues?host=#{hostname}'"
+  system cmd
 end
 
 def parse_response(url)
@@ -67,7 +78,7 @@ def load_initrds(load_path, hash)
     wget_cmd(load_path, lkp_url, "lkp-#{arch}.cgz")
 end
 
-def run(hostname, load_path, hash)
+def start_container(hostname, load_path, hash)
   docker_image = hash['docker_image']
   system "#{ENV['CCI_SRC']}/sbin/docker-pull #{docker_image}"
   system(
@@ -77,24 +88,26 @@ def run(hostname, load_path, hash)
   clean_dir(load_path)
 end
 
-def main(hostname)
+def main(hostname, queues)
+  set_host2queues(hostname, queues)
   url = get_url hostname
   puts url
   hash = parse_response url
-  return if hash.nil?
+  return del_host2queues(hostname) if hash.nil?
 
   load_path = build_load_path(hostname)
   load_initrds(load_path, hash)
-  run(hostname, load_path, hash)
+  start_container(hostname, load_path, hash)
+  del_host2queues(hostname)
 end
 
-def loop_main(hostname)
+def loop_main(hostname, queues)
   loop do
     begin
-      main(hostname)
+      main(hostname, queues)
     rescue StandardError => e
       puts e.backtrace
-      # if an exception happend, request the next time after 30 seconds
+      # if an exception occurs, request the next time after 30 seconds
       sleep 25
     ensure
       sleep 5
@@ -109,11 +122,11 @@ def save_pid(pids)
   f.close
 end
 
-def multi_docker(hostname, nr_container)
+def multi_docker(hostname, nr_container, queues)
   pids = []
   nr_container.to_i.times do |i|
     pid = Process.fork do
-      loop_main("#{hostname}-#{i}")
+      loop_main("#{hostname}-#{i}", queues)
    end
    pids << pid
  end
-- 
2.23.0
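The two curl helpers above differ only in the endpoint and query string, so the URL construction can be factored out and checked on its own. A sketch with the patch's defaults hard-coded (the relevant_defaults lookup is elided; hostname and queue values are illustrative):

```shell
#!/bin/sh
# Defaults from the patch; in compass-ci these come via relevant_defaults.
SCHED_HOST=172.17.0.1
SCHED_PORT=3000

# Build the PUT url for registering a host's queues; the provider then
# runs: curl -X PUT '<this url>'
set_host2queues_url() {
	echo "http://${SCHED_HOST}:${SCHED_PORT}/set_host2queues?host=$1&queues=$2"
}

# Build the matching deregistration url.
del_host2queues_url() {
	echo "http://${SCHED_HOST}:${SCHED_PORT}/del_host2queues?host=$1"
}

set_host2queues_url dc-8g-1 "dc-8g,dc-16g"
```

Registering on entry to main and deregistering on every exit path (including the nil-hash early return) keeps the scheduler's host2queues map from accumulating dead hosts.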
[PATCH v2 compass-ci] container: fix failed to build kibana images
by Liu Yinsi 09 Nov '20

[why]
When building the kibana image on an x86 machine, it fails:

  [root@localhost kibana]# ./build
  Sending build context to Docker daemon  5.12kB
  Step 1/3 : FROM gagara/kibana-oss-arm64:7.6.2
  7.6.2: Pulling from gagara/kibana-oss-arm64
  38163f410fa0: Pull complete
  69a4d016f221: Pull complete
  95e6c6e7c9ca: Pull complete
  d13f429dd982: Pull complete
  508bb3330fb2: Pull complete
  9634e726f1b6: Pull complete
  9c26c37850c8: Pull complete
  0d0ad8467060: Pull complete
  940f92726f8b: Pull complete
  Digest: sha256:541632b7e9780a007f8a8be82ac8853ddcebcb04a596c00500b73f77eacfbd16
  Status: Downloaded newer image for gagara/kibana-oss-arm64:7.6.2
   ---> f482a0472f78
  Step 2/3 : MAINTAINER Wu Zhende <wuzhende666(a)163.com>
   ---> Running in cfa86d8ce976
  Removing intermediate container cfa86d8ce976
   ---> 3be6c5f24d4b
  Step 3/3 : RUN sed -i 's/server.host: "0"/server.host: "0.0.0.0"/' config/kibana.yml
   ---> Running in ff455f66df8b
  standard_init_linux.go:220: exec user process caused "exec format error"
  libcontainer: container start initialization failed: standard_init_linux.go:220: exec user process caused "exec format error"
  The command '/bin/sh -c sed -i 's/server.host: "0"/server.host: "0.0.0.0"/' config/kibana.yml' returned a non-zero code: 1

because the arm base image cannot be built on an x86 machine.

[how]
1. use an images dict to store the arm and x86 base images
2. use $(arch) to choose the base image according to the system architecture

Signed-off-by: Liu Yinsi <liuyinsi(a)163.com>
---
 container/kibana/Dockerfile | 4 +++-
 container/kibana/build      | 8 +++++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/container/kibana/Dockerfile b/container/kibana/Dockerfile
index 35802fe..6e0dba0 100644
--- a/container/kibana/Dockerfile
+++ b/container/kibana/Dockerfile
@@ -1,7 +1,9 @@
 # SPDX-License-Identifier: MulanPSL-2.0+
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
 
-FROM gagara/kibana-oss-arm64:7.6.2
+ARG BASE_IMAGE
+
+FROM ${BASE_IMAGE}
 
 # docker image borrowed from hub.docker.com/r/gagara/kibana-oss-arm64
diff --git a/container/kibana/build b/container/kibana/build
index a7e4717..52d5a2a 100755
--- a/container/kibana/build
+++ b/container/kibana/build
@@ -3,4 +3,10 @@
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
 # frozen_string_literal: true
 
-system 'docker build -t kibana:7.6.2 .'
+BASE_IMAGE_DICT = {
+  'aarch64' => 'gagara/kibana-oss-arm64:7.6.2',
+  'x86_64' => 'kibana:7.6.2' }.freeze
+
+BASE_IMAGE = BASE_IMAGE_DICT[%x(arch).chomp]
+
+system "docker build -t kibana:7.6.2 --build-arg BASE_IMAGE=#{BASE_IMAGE} ."
-- 
2.23.0
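The same arch-to-image selection can be sketched in shell. The image names mirror the patch; pick_base_image is an illustrative helper, and `uname -m` stands in for the `arch` command used in the build script:

```shell
#!/bin/sh
# Map a machine architecture (as printed by `arch`/`uname -m`) to the
# kibana base image used for the docker build.
pick_base_image() {
	case "$1" in
	aarch64) echo 'gagara/kibana-oss-arm64:7.6.2' ;;
	x86_64)  echo 'kibana:7.6.2' ;;
	*) echo "unsupported arch: $1" >&2; return 1 ;;
	esac
}

BASE_IMAGE=$(pick_base_image "$(uname -m)") &&
	echo "docker build -t kibana:7.6.2 --build-arg BASE_IMAGE=$BASE_IMAGE ."
```

Failing loudly on an unknown architecture is safer than the Ruby dict, which silently yields nil and passes an empty --build-arg to docker.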
[PATCH v1 lkp-tests] tests: adapt the special version of sysbench test
by Zhang Yu 09 Nov '20

[why]
To test mysql with the special version of sysbench, different test files
are required.

[how]
Add parameters to the test, and use the different sysbench test files.

Signed-off-by: Zhang Yu <2134782174(a)qq.com>
---
 tests/sysbench-mysql | 79 +++++++++++++++++++++++++++++++-------------
 1 file changed, 56 insertions(+), 23 deletions(-)

diff --git a/tests/sysbench-mysql b/tests/sysbench-mysql
index 4ab5b453..a4d79550 100755
--- a/tests/sysbench-mysql
+++ b/tests/sysbench-mysql
@@ -1,52 +1,85 @@
 #!/bin/sh
-# mysql_user
-# mysql_host
-# mysql_port
-# mysql_db
-# db_driver
 # - oltp_test_mode
 # - oltp_tables_count
 # - oltp_table_size
+# - max_requests
+# - mysql_table_engine
 # - nr_threads
+# - rand_type
+# - rand_spec_pct
 # - runtime
 # - report_interval
 
 : "${mysql_user:=root}"
-: "${mysql_host:=localhost}"
-: "${mysql_port:=3306}"
-: "${mysql_db:=test_1000}"
+: "${mysql_host:=$direct_server_ips}"
+: "${mysql_port:=$mysql_port}"
+: "${mysql_db:=sysbench_1}"
+: "${mysql_password:=$mysql_password}"
 : "${db_driver:=mysql}"
-: "${oltp_tables_count:=3}"
-: "${oltp_table_size:=1000}"
-: "${nr_threads:=64}"
-: "${runtime:=120}"
-: "${report_interval:=10}"
+: "${oltp_test_mode:=complex}"
+: "${oltp_tables_count:=1000}"
+: "${oltp_table_size:=100000}"
+: "${max_requests:=0}"
+: "${mysql_table_engine:=innodb}"
+: "${rand_type:=special}"
+: "${rand_spec_pct:=100}"
+: "${nr_threads:=256}"
+: "${runtime:=600}"
+: "${report_interval:=1}"
 
-args=(
+args1=(
 	--mysql-user=$mysql_user
 	--mysql-host=$mysql_host
 	--mysql-port=$mysql_port
 	--mysql-db=$mysql_db
+	--oltp-test-mode=$oltp_test_mode
 	--db-driver=$db_driver
-	--tables=$oltp_tables_count
-	--table-size=$oltp_table_size
+	--mysql-password=$mysql_password
+	--max-requests=$max_requests
+	--mysql-table-engine=$mysql_table_engine
+	--oltp-table-size=$oltp_table_size
+	--oltp-tables-count=$oltp_tables_count
+	--rand-type=$rand_type
+	--rand-spec-pct=$rand_spec_pct
 	--threads=$nr_threads
 	--time=$runtime
-	--report-interval=$report_interval
 )
 
+args2=(
+	--mysql-user=$mysql_user
+	--mysql-password=$mysql_password
+	--mysql-host=$mysql_host
+	--mysql-port=$mysql_port
+	--mysql-db=$mysql_db
+	--threads=$nr_threads
+	--oltp-read-only=$oltp_read_only
+	--oltp-table-size=$oltp_table_size
+	--oltp-tables-count=$oltp_tables_count
+	--report-interval=$report_interval
+	--time=$runtime
+	--events=$events
+)
+
+stop_firewalld()
+{
+	systemctl stop firewalld
+	iptables -F
+}
+
 run_sysbench_step()
 {
-	sysbench /usr/share/sysbench/$1 "${args[@]}" $2
+	lua_script=$1
+	shift
+	sysbench /usr/local/share/sysbench/tests/include/oltp_legacy/$lua_script "$@"
 }
 
 run_sysbench_mysql()
 {
-	systemctl start mysqld
-	mysql -e "create database test_1000;"
-	run_sysbench_step oltp_common.lua prepare
-	run_sysbench_step oltp_read_write.lua run
-	run_sysbench_step oltp_common.lua cleanup
+	run_sysbench_step parallel_prepare.lua ${args1[@]} prepare
+	run_sysbench_step oltp.lua ${args2[@]} run
+	run_sysbench_step oltp.lua ${args2[@]} cleanup
 }
 
+stop_firewalld
 run_sysbench_mysql
-- 
2.23.0
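run_sysbench_step's argument handling above (first argument = lua script name, the rest passed through verbatim) is the standard shift pattern. A dry-run sketch that echoes the command instead of invoking the real sysbench (the script path follows the patch; the sample args are illustrative):

```shell
#!/bin/sh
# Dry-run stand-in for run_sysbench_step: pop the lua script name off the
# argument list, then forward everything else unchanged.
run_sysbench_step() {
	lua_script=$1
	shift
	echo "sysbench /usr/local/share/sysbench/tests/include/oltp_legacy/$lua_script $*"
}

run_sysbench_step oltp.lua --threads=4 --time=600 run
```

Forwarding with "$@" (echoed here as $*) lets one helper serve prepare, run, and cleanup with entirely different argument sets.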
Re: [PATCH compass-ci] container: fix failed to build kibana images
by Liu Yinsi 09 Nov '20

>> # docker image borrowed from hub.docker.com/r/gagara/kibana-oss-arm64
>>
>> diff --git a/container/kibana/build b/container/kibana/build
>> index a7e4717..60fdea2 100755
>> --- a/container/kibana/build
>> +++ b/container/kibana/build
>> @@ -3,4 +3,10 @@
>>  # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
>>  # frozen_string_literal: true
>>
>> -system 'docker build -t kibana:7.6.2 .'
>> +BASE_IMAGE_DICT = {'aarch64'=>'gagara/kibana-oss-arm64:7.6.2',
>> +                   'x86_64'=>'kibana:7.6.2'
>
> add spaces around '=>'
>
> to write the hash in multiple lines, write it as:
> BASE_IMAGE_DICT = {
>   'aarch64' => 'gagara/kibana-oss-arm64:7.6.2',
>   'x86_64' => 'kibana:7.6.2'
> }

good

Thanks,
Yinsi

>
> Thanks,
> Luan Shengde
>
>> +}
>> +
>> +BASE_IMAGE = BASE_IMAGE_DICT[%x(arch).chomp]
>> +
>> +system "docker build -t kibana:7.6.2 --build-arg BASE_IMAGE=#{BASE_IMAGE} ."
>> --
>> 2.23.0
[PATCH v3 compass-ci 04/11] mail-robot: parse commit url and pubkey
by Luan Shengde 09 Nov '20

parse commit url and pubkey

parse commit url:
  - check the commit url exists
  - check the base url is in upstream-repos
  - check the commit is available:
      hub: gitee.com
        clone the repo and check the commit exists
      hub: non-gitee.com
        check the feedback of the curl command to verify the commit url is available

parse pubkey:
  - extract the attachment for the pubkey
  - check the pubkey is available

Signed-off-by: Luan Shengde <shdluan(a)163.com>
---
 lib/parse-apply-account-email.rb | 142 +++++++++++++++++++++++++++++++
 1 file changed, 142 insertions(+)
 create mode 100755 lib/parse-apply-account-email.rb

diff --git a/lib/parse-apply-account-email.rb b/lib/parse-apply-account-email.rb
new file mode 100755
index 0000000..703f3f9
--- /dev/null
+++ b/lib/parse-apply-account-email.rb
@@ -0,0 +1,142 @@
+#!/usr/bin/env ruby
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# frozen_string_literal: true
+
+require 'json'
+require 'mail'
+
+# check whether there is a commit url and ssh pub_key in the email
+# apply for an account on the jumper server
+# the entry point is parse_commit_url_pub_key
+class ParseApplyAccountEmail
+  def initialize(mail_content)
+    @mail_content = mail_content
+
+    @my_info = {
+      'my_email' => mail_content.from[0],
+      'my_name' => mail_content.From.unparsed_value.gsub(/ <[^<>]*>/, '')
+    }
+  end
+
+  def extract_commit_url
+    mail_content_body = @mail_content.part[0].part[0].body.decoded || @mail_content.part[0].body.decoded
+    mail_content_line = mail_content_body.gsub(/\n/, '')
+
+    # the commit url should be headed with the prefix: my oss commit
+    # the commit url must be in a standard format, like:
+    # https://github.com/torvalds/aalinux/commit/7be74942f184fdfba34ddd19a0d995de…
+    no_commit_url unless mail_content_line.match?(%r{my oss commit:\s*https?://[^/]*/[^/]*/[^/]*/commit/[\w\d]{40}})
+
+    mail_content_body.match(%r{https?://[^/]*/[^/]*/[^/]*/commit/[\w\d]{40}})[0]
+  end
+
+  def no_commit_url
+    error_message = "No matched commit url found.\n"
+    error_message += "Ensure that you have added a right commit url with the prefix 'my oss commit:'."
+
+    raise error_message
+  end
+
+  def parse_commit_url
+    url = extract_commit_url
+    base_url = url.gsub(%r{/commit/[\w\d]{40}$}, '')
+
+    base_url_in_upstream_repos('/c/upstream-repos', base_url)
+    commit_url_availability(url, base_url)
+
+    return url
+  end
+
+  def base_url_in_upstream_repos(upstream_dir, base_url)
+    Dir.chdir(upstream_dir)
+    match_out = %x(grep -rn #{base_url})
+
+    return unless match_out.empty?
+
+    error_message = "The repo url for your commit is not in our upstream-repo list.\n"
+    error_message += 'Use a new one, or consult the manager for the available repo list.'
+
+    raise error_message
+  end
+
+  def commit_url_availability(url, base_url)
+    hub_name = url.split('/')[2]
+
+    # it requires authentication to execute curl to get the commit information
+    # clone the repo and then validate the commit for the email address
+    if hub_name.eql? 'gitee.com'
+      gitee_commit(url, base_url)
+    else
+      non_gitee_commit(url)
+    end
+  end
+
+  def gitee_commit(url, base_url)
+    my_gitee_commit = GiteeCommitUrl.new(@my_info, url, base_url)
+    my_gitee_commit.gitee_commit_index
+  end
+
+  def non_gitee_commit(url)
+    url_fdback = %x(curl #{url})
+    email_index = url_fdback.index @my_info['my_email']
+
+    return unless email_index.nil?
+
+    error_message = "We can not confirm the commit url matches your email.\n"
+    error_message += 'Make sure that the commit url is right,'
+    error_message += ' or it is truly submitted with your email.'
+
+    raise error_message
+  end
+
+  def parse_pub_key
+    pub_key = @mail_content.part[1].body.decoded if @mail_content.part[1].filename == 'id_rsa.pub'
+    pub_key_exist(pub_key)
+
+    return pub_key
+  end
+
+  def pub_key_exist(pub_key)
+    return unless pub_key.nil?
+
+    error_message = "No pub_key found.\n"
+    error_message += 'Please add the pub_key as an attachment with filename: id_rsa.pub.'
+
+    raise error_message
+  end
+end
+
+# check commit url availability for hub gitee.com
+class GiteeCommitUrl
+  def initialize(my_info, url, base_url)
+    @my_info = my_info
+    @url = url
+    @base_url = base_url
+  end
+
+  def gitee_commit_index
+    repo_dir = @url.split('/')[-3]
+    repo_url = [@base_url, 'git'].join('.')
+    commit_id = @url.split('/')[-1]
+
+    Dir.chdir '/tmp'
+    %x(/usr/bin/git clone #{repo_url} #{repo_dir})
+
+    email_index = %x(/usr/bin/git -C #{repo_dir} show #{commit_id}).index @my_info['my_email']
+
+    FileUtils.rm_rf repo_dir
+
+    gitee_commit_exist(email_index)
+  end
+
+  def gitee_commit_exist(email_index)
+    return unless email_index.nil?
+
+    error_message = "We can not confirm the commit url matches your email.\n"
+    error_message += 'Make sure that the commit url is right,'
+    error_message += ' or it is truly submitted with your email.'
+
+    raise error_message
+  end
+end
-- 
2.23.0
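The commit-url shape the parser matches (scheme://host/owner/repo/commit/40-char-sha) can be checked from shell as well. A sketch with a made-up sample body and sha; note the character class is narrowed to hex here, a slight tightening of the Ruby pattern's [\w\d]:

```shell
#!/bin/sh
# Pull the first well-formed commit url out of a mail body (sample data).
extract_commit_url() {
	printf '%s\n' "$1" |
		grep -oE 'https?://[^/ ]+/[^/ ]+/[^/ ]+/commit/[0-9a-f]{40}' |
		head -n1
}

body="my oss commit: https://github.com/torvalds/linux/commit/0123456789abcdef0123456789abcdef01234567"
extract_commit_url "$body"
```

Anchoring on exactly three path components before /commit/ rejects urls that point at a file or a branch rather than a commit.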
[PATCH v2 compass-ci 1/6] git_mirror: add webhook handler
by Li Yuanchao 09 Nov '20

The webhook sends a message when a user pushes. When we get the url of the
repository, try to find its git_repo. If found, push it to the queue; if not
found, it may be an illegal request, so do nothing.

Signed-off-by: Li Yuanchao <lyc163mail(a)163.com>
---
 lib/git_mirror.rb | 43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/lib/git_mirror.rb b/lib/git_mirror.rb
index 65d6940..5332c16 100644
--- a/lib/git_mirror.rb
+++ b/lib/git_mirror.rb
@@ -88,10 +88,16 @@ class MirrorMain
     @git_queue = Queue.new
     @es_client = Elasticsearch::Client.new(url: "http://#{ES_HOST}:#{ES_PORT}")
     load_fork_info
+    connection_init
+    handle_webhook
+  end
+
+  def connection_init
     connection = Bunny.new('amqp://172.17.0.1:5672')
     connection.start
     channel = connection.create_channel
     @message_queue = channel.queue('new_refs')
+    @webhook_queue = channel.queue('web_hook')
   end
 
   def fork_stat_init(stat_key)
@@ -287,3 +293,40 @@ class MirrorMain
     es_repo_update(git_repo)
   end
 end
+
+# main thread
+class MirrorMain
+  def check_git_repo(git_repo, webhook_url)
+    return @git_info.key?(git_repo) && Array(@git_info[git_repo]['url'])[0] == webhook_url
+  end
+
+  # example
+  # url: https://gitee.com/theprocess/oec-hardware  git_repo: oec-hardware/oec-hardware
+  # url: https://github.com/berkeley-abc/abc        git_repo: a/abc/abc
+  # url: https://github.com/Siguyi/AvxToNeon        git_repo: AvxToNeon/Siguyi
+  def get_git_repo(webhook_url)
+    strings = webhook_url.split('/')
+    project = strings[-1]
+    fork_name = strings[-2]
+
+    git_repo = "#{project}/#{project}"
+    return git_repo if check_git_repo(git_repo, webhook_url)
+
+    git_repo = "#{project}/#{fork_name}"
+    return git_repo if check_git_repo(git_repo, webhook_url)
+
+    git_repo = "#{project[0]}/#{project}/#{project}"
+    return git_repo if check_git_repo(git_repo, webhook_url)
+
+    puts "webhook: #{webhook_url} is not found!"
+  end
+
+  def handle_webhook
+    Thread.new do
+      @webhook_queue.subscribe(block: true) do |_delivery, _properties, webhook_url|
+        git_repo = get_git_repo(webhook_url)
+        do_push(git_repo) if git_repo
+        sleep(0.1)
+      end
+    end
+  end
+end
-- 
2.23.0
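The three candidate layouts get_git_repo probes can be derived from the url with plain parameter expansion. A sketch of the pure string munging only, without the lookup against @git_info that the patch performs for each candidate:

```shell
#!/bin/sh
# Print the three candidate git_repo layouts for a webhook url, in the
# order the handler tries them: project/project, project/fork,
# first-letter/project/project.
git_repo_candidates() {
	url=$1
	project=${url##*/}           # last path component
	rest=${url%/*}
	fork=${rest##*/}             # second-to-last path component
	first=$(printf '%.1s' "$project")
	echo "$project/$project"
	echo "$project/$fork"
	echo "$first/$project/$project"
}

git_repo_candidates https://github.com/Siguyi/AvxToNeon
```

Trying a fixed, ordered candidate list keeps the handler cheap: an unknown url simply exhausts the candidates and is dropped as illegal.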
[PATCH compass-ci] service/monitoring: fix can't delete closed ws
by Wu Zhende 09 Nov '20

Use a hash to save the mapping between "query" and "ws". Calling the
function "add_filter_rule" changes the "query": if query={"job_id": "1"},
it becomes query={"job_id": ["1"]}. So we need to use the modified query
to delete the key-value pair from the hash.

Signed-off-by: Wu Zhende <wuzhende666(a)163.com>
---
 src/monitoring/filter.cr     | 2 ++
 src/monitoring/monitoring.cr | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/src/monitoring/filter.cr b/src/monitoring/filter.cr
index 8e672e0..5d3bc8b 100644
--- a/src/monitoring/filter.cr
+++ b/src/monitoring/filter.cr
@@ -16,6 +16,8 @@ class Filter
 
     @hash[query] = Array(HTTP::WebSocket).new unless @hash[query]?
     @hash[query] << socket
+
+    return query
   end
 
   private def convert_hash_value_to_array(query)
diff --git a/src/monitoring/monitoring.cr b/src/monitoring/monitoring.cr
index dff1902..d09c363 100644
--- a/src/monitoring/monitoring.cr
+++ b/src/monitoring/monitoring.cr
@@ -19,7 +19,7 @@ module Monitoring
       # also can be {"job_id": ["1", "2"]}
       query = JSON.parse(msg)
       if query.as_h?
-        filter.add_filter_rule(query, socket)
+        query = filter.add_filter_rule(query, socket)
       end
     end
-- 
2.23.0
[PATCH compass-ci] kernel_params: add kernel custom params
by Wei Jihui 09 Nov '20

[Why]
Users may use custom kernel params to set up the OS.

input: (job yaml)
  kernel_custom_params: "sched_steal_node_limit=8"

output: (setup log)
  ... ro sched_steal_node_limit=8 rdinit=/sbin/init ...

Signed-off-by: Wei Jihui <weijihuiall(a)163.com>
---
 src/scheduler/kernel_params.cr | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/scheduler/kernel_params.cr b/src/scheduler/kernel_params.cr
index 54438b5..a558735 100644
--- a/src/scheduler/kernel_params.cr
+++ b/src/scheduler/kernel_params.cr
@@ -7,6 +7,10 @@ class Job
     return "user=lkp job=/lkp/scheduled/job.yaml RESULT_ROOT=/result/job rootovl ip=dhcp ro"
   end
 
+  private def kernel_custom_params
+    return @hash["kernel_custom_params"] if @hash["kernel_custom_params"]?
+  end
+
   private def set_kernel_append_root
     os_real_path = JobHelper.service_path("#{SRV_OS}/#{os_dir}")
 
@@ -30,7 +34,7 @@ class Job
   end
 
   private def set_kernel_params
-    self["kernel_params"] = " #{kernel_common_params()} #{kernel_append_root} #{kernel_console()}"
+    self["kernel_params"] = " #{kernel_common_params()} #{kernel_custom_params()} #{kernel_append_root} #{kernel_console()}"
   end
 end
-- 
2.23.0
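The effect of splicing kernel_custom_params between the common params and the root append can be seen with plain string interpolation. A sketch only: the common params come from the patch, while kernel_append_root is an illustrative stand-in for the value the scheduler computes:

```shell
#!/bin/sh
# Values from the commit message; kernel_append_root is a stand-in.
kernel_common_params="user=lkp job=/lkp/scheduled/job.yaml RESULT_ROOT=/result/job rootovl ip=dhcp ro"
kernel_custom_params="sched_steal_node_limit=8"   # from the job yaml
kernel_append_root="rdinit=/sbin/init"

# Same splice order as set_kernel_params: common, custom, append-root.
kernel_params=" $kernel_common_params $kernel_custom_params $kernel_append_root"
echo "$kernel_params"
```

Placing the custom params before the root append reproduces the "... ro sched_steal_node_limit=8 rdinit=/sbin/init ..." ordering shown in the setup log.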
