Compass-ci

compass-ci@openeuler.org

[PATCH v5 compass-ci 3/5] lib/git_mirror.rb: change the way to calculate priority for repo
by Li Yuanchao, 24 Mar '21

Change the way the priority of a repo is calculated, so that active repos are updated more often and mirroring is more effective.

Signed-off-by: Li Yuanchao <lyc163mail(a)163.com>
---
 lib/git_mirror.rb | 36 +++++++++++++++++++++++++++---------
 1 file changed, 27 insertions(+), 9 deletions(-)

diff --git a/lib/git_mirror.rb b/lib/git_mirror.rb
index 50a6c75..5ba9154 100644
--- a/lib/git_mirror.rb
+++ b/lib/git_mirror.rb
@@ -115,7 +115,6 @@ class MirrorMain
   def initialize
     @feedback_queue = Queue.new
     @fork_stat = {}
-    @priority = 0
     @priority_queue = PriorityQueue.new
     @git_info = {}
     @defaults = {}
@@ -221,11 +220,9 @@ class MirrorMain
   def push_git_queue
     return if @git_queue.size >= 1

-    fork_key = @priority_queue.delete_min_return_key
+    fork_key, old_pri = @priority_queue.delete_min
     do_push(fork_key)
-    priority_set = @priority > @fork_stat[fork_key][:priority] ? (@priority - @fork_stat[fork_key][:priority]) : 1
-    @priority_queue.push fork_key, priority_set
-    @priority += 1
+    @priority_queue.push fork_key, get_repo_priority(fork_key, old_pri)
   end

   def main_loop
@@ -251,8 +248,7 @@ class MirrorMain
     @git_info[git_repo] = merge_defaults(git_repo, @git_info[git_repo], belong)
     fork_stat_init(git_repo)
-    @priority_queue.push git_repo, @priority
-    @priority += 1
+    @priority_queue.push git_repo, get_repo_priority(git_repo, 0)
   end

   def compare_refs(cur_refs, old_refs)
@@ -388,8 +384,7 @@ class MirrorMain
     @git_info[git_repo] = { 'url' => url, 'git_repo' => git_repo, 'is_submodule' => true, 'belong' => belong }
     fork_stat_init(git_repo)
-    @priority_queue.push git_repo, @priority
-    @priority += 1
+    @priority_queue.push git_repo, get_repo_priority(git_repo, 0)
   end
 end

@@ -474,6 +469,8 @@ end

 # main thread
 class MirrorMain
+  WEEK_SECONDS = 604800
+
   def merge_defaults(object_key, object, belong)
     return object if object_key == belong

@@ -541,4 +538,25 @@ class MirrorMain

     return true
   end
+
+  def get_repo_priority(git_repo, old_pri)
+    old_pri ||= 0
+    mirror_dir = "/srv/git/#{@git_info[git_repo]['belong']}/#{git_repo}"
+    mirror_dir = "#{mirror_dir}.git" unless @git_info[git_repo]['is_submodule']
+
+    return old_pri + Math.cbrt(WEEK_SECONDS) unless File.directory?(mirror_dir)
+
+    return cal_priority(mirror_dir, old_pri)
+  end
+
+  def cal_priority(mirror_dir, old_pri)
+    last_commit_time = %x(git -C #{mirror_dir} log --pretty=format:"%ct" -1 2>/dev/null).to_i
+    return old_pri + Math.cbrt(WEEK_SECONDS) if last_commit_time.zero?
+
+    t = Time.now.to_i
+    interval = t - last_commit_time
+    return old_pri + Math.cbrt(WEEK_SECONDS) if interval <= 0
+
+    return old_pri + Math.cbrt(interval)
+  end
 end
--
2.23.0
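
Since the mirror queue is a min-priority queue (delete_min pops the smallest key), the cube root makes a recently active repo accumulate priority slowly and come back to the front sooner, while a stale repo is pushed far back. A minimal standalone Ruby sketch of that rule follows; it is an illustration only (the helper name and the sample intervals are made up, not part of the patch):

    # Standalone illustration of the cube-root priority rule used above.
    # The queue pops the smallest key first, so a smaller increment means
    # the repo gets mirrored again sooner.
    WEEK_SECONDS = 604_800

    def next_priority(old_pri, seconds_since_last_commit)
      # Unknown or bogus intervals fall back to a week's worth of "age",
      # mirroring how the patch handles missing mirrors and zero timestamps.
      interval = seconds_since_last_commit
      interval = WEEK_SECONDS if interval <= 0
      old_pri + Math.cbrt(interval)
    end

    puts next_priority(0, 3600)            # ~15.3  -> active repo, re-queued early
    puts next_priority(0, 90 * 24 * 3600)  # ~198.1 -> stale repo, deferred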
[PATCH v5 compass-ci 2/5] src/lib/web_backend.rb: implement for group jobs stats count
by Li Yuanchao, 24 Mar '21

Query jobs by conditions such as group_id or suite, and count the number of passed and failed cases. The output looks like:

{
  "kezhiming": {
    "nr_all": $nr_all,
    "nr_pass": $nr_pass,
    "nr_fail": $nr_fail
  },
  "chenqun": {
    "nr_all": $nr_all,
    "nr_pass": $nr_pass,
    "nr_fail": $nr_fail
  },
  ...
}

Signed-off-by: Li Yuanchao <lyc163mail(a)163.com>
---
 src/lib/web_backend.rb | 66 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/src/lib/web_backend.rb b/src/lib/web_backend.rb
index 89c0f3c..40471e0 100644
--- a/src/lib/web_backend.rb
+++ b/src/lib/web_backend.rb
@@ -121,6 +121,8 @@ end
 def get_dimension_conditions(params)
   dimension = params.key?(:dimension) ? [params.delete(:dimension)] : []
+  dimension = params.key?(:GROUP_BY) ? [params.delete(:GROUP_BY)] : [] if dimension.empty?
+
   conditions = {}
   FIELDS.each do |f|
     v = params[f]
@@ -548,3 +550,67 @@ def new_refs_statistics(params)
   end
   [200, headers.merge('Access-Control-Allow-Origin' => '*'), body]
 end
+
+def single_count(stats)
+  fail_count = 0
+  pass_count = 0
+  single_nr_fail = 0
+  single_nr_pass = 0
+  stats.each do |stat, value|
+    fail_count += 1 if stat.match(/\.fail$/i)
+    pass_count += 1 if stat.match(/\.pass$/i)
+    single_nr_fail = value if stat.match(/\.nr_fail$/i)
+    single_nr_pass = value if stat.match(/\.nr_pass$/i)
+  end
+  fail_count = single_nr_fail.zero? ? fail_count : single_nr_fail
+  pass_count = single_nr_pass.zero? ? pass_count : single_nr_pass
+  [fail_count, pass_count, fail_count + pass_count]
+end
+
+def count_stats(job_list)
+  nr_stats = { 'nr_fail' => 0, 'nr_pass' => 0, 'nr_all' => 0 }
+  job_list.each do |job|
+    next unless job['_source']['stats']
+
+    fail_count, pass_count, all_count = single_count(job['_source']['stats'])
+    nr_stats['nr_fail'] += fail_count
+    nr_stats['nr_pass'] += pass_count
+    nr_stats['nr_all'] += all_count
+  end
+  nr_stats
+end
+
+def get_jobs_stats_count(dimension, must, size, from)
+  dimension_list = get_dimension_list(dimension)
+  stats_count = {}
+  dimension_list.each do |dim|
+    job_list = query_dimension(dimension[0], dim, must, size, from)
+    stats_count[dim] = count_stats(job_list)
+  end
+  stats_count.to_json
+end
+
+def get_stats_by_dimension(conditions, dimension, must, size, from)
+  must += build_multi_field_subquery_body(conditions)
+  count_query = { query: { bool: { must: must } } }
+  total = es_count(count_query)
+  return {} if total < 1
+
+  get_jobs_stats_count(dimension, must, size, from)
+end
+
+def get_jobs_stats(params)
+  dimension, conditions = get_dimension_conditions(params)
+  must = get_es_must(params)
+  get_stats_by_dimension(conditions, dimension, must, 1000, 0)
+end
+
+def group_jobs_stats(params)
+  begin
+    body = get_jobs_stats(params)
+  rescue StandardError => e
+    warn e.message
+    return [500, headers.merge('Access-Control-Allow-Origin' => '*'), 'group jobs table error']
+  end
+  [200, headers.merge('Access-Control-Allow-Origin' => '*'), body]
+end
--
2.23.0
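
In short, every stats key ending in ".pass" or ".fail" counts as one case, unless the job also reports explicit ".nr_pass" / ".nr_fail" totals, which then take precedence. A hypothetical example of what single_count would return (the stats keys below are invented for illustration, not taken from a real job):

    # Hypothetical _source.stats hash of one job in Elasticsearch.
    stats = {
      'iperf.tcp.sender.pass' => 1,   # one passed case
      'iperf.udp.sender.fail' => 1,   # one failed case
      'ltp.nr_pass'           => 120, # explicit totals reported by the suite
      'ltp.nr_fail'           => 3
    }

    # single_count(stats) would return [3, 120, 123]:
    #   fail_count = 3    (ltp.nr_fail overrides the single ".fail" hit)
    #   pass_count = 120  (ltp.nr_pass overrides the single ".pass" hit)
    #   all_count  = 123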
[PATCH v5 compass-ci 1/5] container/web-backend: add api for group jobs stats count
by Li Yuanchao, 24 Mar '21

Signed-off-by: Li Yuanchao <lyc163mail(a)163.com>
---
 container/web-backend/web-backend | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/container/web-backend/web-backend b/container/web-backend/web-backend
index 33a3fae..adb91e0 100755
--- a/container/web-backend/web-backend
+++ b/container/web-backend/web-backend
@@ -146,3 +146,28 @@ end
 get '/get_repo_statistics' do
   new_refs_statistics(params)
 end
+
+# GET /get_jobs_summary?suite=iperf&GROUP_BY=my_name
+# must:
+#   - query_conditions
+#     - suite / group_id / ...
+#   - GROUP_BY
+#     - group_id / my_email / my_name
+#
+# Response like:
+# - {
+#     "kezhiming": {
+#       "nr_all": $nr_all,
+#       "nr_pass": $nr_pass,
+#       "nr_fail": $nr_fail
+#     },
+#     "chenqun": {
+#       "nr_all": $nr_all,
+#       "nr_pass": $nr_pass,
+#       "nr_fail": $nr_fail
+#     },
+#     ...
+#   }
+get '/get_jobs_summary' do
+  group_jobs_stats(params)
+end
--
2.23.0
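
As a usage sketch, the route can be exercised with a plain HTTP GET. The host and port below are placeholders that depend on how the web-backend container is exposed in a given deployment; only the path and the query parameters come from the comment in the patch:

    # Hypothetical client for GET /get_jobs_summary; adjust host/port to your deployment.
    require 'net/http'
    require 'uri'
    require 'json'

    uri = URI('http://localhost:8000/get_jobs_summary')
    uri.query = URI.encode_www_form('suite' => 'iperf', 'GROUP_BY' => 'my_name')

    response = Net::HTTP.get_response(uri)
    summary  = JSON.parse(response.body)

    # Print "<group>: <passed>/<all> cases passed" for every GROUP_BY bucket.
    summary.each do |name, counts|
      puts "#{name}: #{counts['nr_pass']}/#{counts['nr_all']} cases passed"
    end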
[PATCH v4 compass-ci 5/5] doc/manual: update test-oss-project.en.md
by Li Yuanchao, 24 Mar '21

Signed-off-by: Li Yuanchao <lyc163mail(a)163.com>
---
 doc/manual/test-oss-project.en.md | 152 ++++++++++++++++--------------
 1 file changed, 81 insertions(+), 71 deletions(-)

diff --git a/doc/manual/test-oss-project.en.md b/doc/manual/test-oss-project.en.md
index 1b551b5..7b72e2b 100644
--- a/doc/manual/test-oss-project.en.md
+++ b/doc/manual/test-oss-project.en.md
@@ -1,71 +1,81 @@
-# How DO I Use the Compass-CI Platform to Test Open Source Projects?
-
-This document describes how to use the Compass-CI platform to test open source projects.
-
-### Adding the URL of the Repository to Be Tested to the upstream-repos Repository
-
-Perform the following steps to add the information of the code repository to be tested to the **upstream-repos** repository (https://gitee.com/wu_fengguang/upstream-repos) in YAML format:
-
-1. Fork the upstream-repos repository and clone it to the local host. This document uses the **backlight** repository (https://github.com/baskerville/backlight) as an example.
-
-![](./../pictures/fork_blacklight.png)
-
-2. Run the following command to create a file path named with the repository name and its initial letter:
-
-   ```
-   mkdir -p b/backlight
-   ```
-
-3. Run the following command to create the **backlight** file in the directory:
-
-   ```
-   cd b/backlight
-   touch backlight
-   ```
-
-4. Run the following command to write the URL of the **backlight** repository to the **backlight** file:
-
-   ```
-   vim backlight
-   ```
-
-   The format is as follows:
-
-   ```
-   ---
-   url:
-   - https://github.com/baskerville/backlight
-   ```
-
-   > ![](./../public_sys-resources/icon-notice.gif) **Note**
-   >
-   > You can refer to the existing file format in the **upstream-repos** repository. Ensure that the formats are consistent.
-
-5. Run the **Pull Request** command to submit the new **backlight** file to the **upstream-repos** repository.
-
-### Submitting the Test Task to the Compass-CI Platform
-
-1. Prepare a test case.
-
-   You can compile and add a test case to the **lkp-tests** repository, or directly use the existing test cases in the **jobs** directory of the **lkp-tests** repository (https://gitee.com/wu_fengguang/lkp-tests)
-
-   * Use the test cases that have been adapted in the repository.
-
-     Use the test cases in the **lkp-tests** repository that meet the requirements. The **iperf.yaml** file is used as an example. The **iperf.yaml** file is a test case that has been adapted. It is stored in the **jobs** directory of the **lkp-tests** repository, and contains some basic test parameters.
-
-   * Compile a test case and add it to the repository.
-
-     For details, see [How To Add Test Cases](https://gitee.com/wu_fengguang/lkp-tests/blob/master/doc/add-testcas….
-
-2. Configure the **auto\_submit.yaml** file and submit the test task.
-
-   You only need to add the following configuration information to the **sbin/auto\_submit.yaml** file in the **compass-ci** repository:
-
-   ```
-   b/backlight/backlight:
-   - testbox=vm-2p8g os=openEuler os_version=20.03 os_mount=initramfs os_arch=aarch64 iperf.yaml
-   ```
-
-   Submit the modified **auto\_submit.yaml** file to the **compass-ci** repository using Pull Request. Then you can use the Compass-CI platform to test your project.
-
-   For details about how to set parameters in the **auto\_submit.yaml** file, see https://gitee.com/wu_fengguang/compass-ci/tree/master/doc/job.
+# How DO I Use the Compass-CI Platform to Test Open Source Projects?
+
+This document describes how to use the Compass-CI platform to test open source projects.
+
+### Adding the URL of the Repository to Be Tested to the upstream-repos Repository
+
+Perform the following steps to add the information of the code repository to be tested to the **upstream-repos** repository (https://gitee.com/wu_fengguang/upstream-repos) in YAML format:
+
+1. Fork the upstream-repos repository and clone it to the local host. This document uses the **backlight** repository (https://github.com/baskerville/backlight) as an example.
+
+![](./../pictures/fork_blacklight.png)
+
+2. Run the following command to create a file path named with the repository name and its initial letter:
+
+   ```
+   mkdir -p b/backlight
+   ```
+
+3. Run the following command to create the **backlight** file in the directory:
+
+   ```
+   cd b/backlight
+   touch backlight
+   ```
+
+4. Run the following command to write the URL of the **backlight** repository to the **backlight** file:
+
+   ```
+   vim backlight
+   ```
+
+   The format is as follows:
+
+   ```
+   ---
+   url:
+   - https://github.com/baskerville/backlight
+   ```
+
+   > ![](./../public_sys-resources/icon-notice.gif) **Note**
+   >
+   > You can refer to the existing file format in the **upstream-repos** repository. Ensure that the formats are consistent.
+
+5. Run the **Pull Request** command to submit the new **backlight** file to the **upstream-repos** repository.
+
+### Submitting the Test Task to the Compass-CI Platform
+
+1. Prepare a test case.
+
+   You can compile and add a test case to the **lkp-tests** repository, or directly use the existing test cases in the **jobs** directory of the **lkp-tests** repository (https://gitee.com/wu_fengguang/lkp-tests)
+
+   * Use the test cases that have been adapted in the repository.
+
+     Use the test cases in the **lkp-tests** repository that meet the requirements. The **iperf.yaml** file is used as an example. The **iperf.yaml** file is a test case that has been adapted. It is stored in the **jobs** directory of the **lkp-tests** repository, and contains some basic test parameters.
+
+   * Compile a test case and add it to the repository.
+
+     For details, see [How To Add Test Cases](https://gitee.com/wu_fengguang/lkp-tests/blob/master/doc/add-testcas….
+
+2. Configure **DEFAULTS** files in the **upstream-repos** repository and submit the test task.
+
+   You only need to add a **DEFAULTS** file in the directory of the **backlight** file we referred to above, and configure it like:
+
+   ```
+   submit:
+   - command: testbox=vm-2p16g os=openeuler os_version=20.03 os_mount=cifs os_arch=aarch64 api-avx2neon.yaml
+     branches:
+     - master
+     - next
+   - command: testbox=vm-2p16g os=openeuler os_version=20.03 os_mount=cifs os_arch=aarch64 other-avx2neon.yaml
+     branches:
+     - branch_name_a
+     - branch_name_b
+
+   ```
+
+   Submit the modified **DEFAULTS** file to the **upstream-repos** repository using Pull Request. Then you can use the Compass-CI platform to test your project.
+
+   For details about how to configure DEFAULTS files, see https://gitee.com/wu_fengguang/upstream-repos/blob/master/README.md.
+
+   For the meaning and effect of the parameters in the command, see https://gitee.com/wu_fengguang/compass-ci/tree/master/doc/job.
--
2.23.0
[PATCH v2 compass-ci] container/srv-http/Dockerfile: build h5ai by builder
by Lu Weitao, 24 Mar '21

[Why]
Use node:alpine as a builder stage to build h5ai, so the final image no longer needs to install packages such as nginx, nodejs and npm via apk, which saves deployment time.

Signed-off-by: Lu Weitao <luweitaobe(a)163.com>
---
 container/srv-http/Dockerfile | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/container/srv-http/Dockerfile b/container/srv-http/Dockerfile
index 6564730..b7d0677 100644
--- a/container/srv-http/Dockerfile
+++ b/container/srv-http/Dockerfile
@@ -1,25 +1,32 @@
 # SPDX-License-Identifier: MulanPSL-2.0+
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.

-FROM alpine:3.9
+FROM node:alpine as Builder
+
+MAINTAINER Lu Weitao <luweitaobe(a)163.com>
+
+RUN apk add git
+
+RUN git clone https://gitee.com/luweitao_y/h5ai.git /h5ai
+
+WORKDIR /h5ai
+
+RUN npm install && \
+    npm run build
+
+FROM nginx:alpine

 ADD root /

-RUN apk add --no-cache nginx php7 php7-fpm php7-session php7-json php7-exif php7-imagick php7-gd php7-fileinfo nodejs npm git \
-    && mkdir /run/nginx/
+RUN apk add --no-cache php7 php7-fpm php7-session php7-json php7-exif php7-imagick php7-gd php7-fileinfo

 RUN sed -i '/\[global\]/a daemonize = no' /etc/php7/php-fpm.conf
 RUN sed -i "s/user = nobody/user = nginx/g" /etc/php7/php-fpm.d/www.conf
 RUN sed -i "s/group = nobody/group = nginx/g" /etc/php7/php-fpm.d/www.conf

-RUN git clone https://gitee.com/luweitao_y/h5ai.git /h5ai
-WORKDIR /h5ai
+COPY --from=Builder /h5ai/build/_h5ai /srv/_h5ai

-# build h5ai package
-RUN npm install \
-    && npm run build \
-    && cp -r ./build/_h5ai /srv \
-    && chown -R nginx:nginx /srv/_h5ai
+RUN chown -R nginx:nginx /srv/_h5ai

 ENTRYPOINT ["/sbin/entrypoint.sh"]
--
2.23.0
[PATCH compass-ci] container/srv-http/Dockerfile: build h5ai by builder
by Lu Weitao, 24 Mar '21

[Why]
Use node:alpine as a builder stage to build h5ai, so the final image no longer needs to install packages such as nginx, nodejs and npm via apk, which saves deployment time.

Signed-off-by: Lu Weitao <luweitaobe(a)163.com>
---
 container/srv-http/Dockerfile | 25 +++++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/container/srv-http/Dockerfile b/container/srv-http/Dockerfile
index 6564730..81a8a45 100644
--- a/container/srv-http/Dockerfile
+++ b/container/srv-http/Dockerfile
@@ -1,25 +1,30 @@
 # SPDX-License-Identifier: MulanPSL-2.0+
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.

-FROM alpine:3.9
+FROM node:alpine as Builder
+
+RUN apk add git
+
+RUN git clone https://gitee.com/luweitao_y/h5ai.git /h5ai
+
+WORKDIR /h5ai
+
+RUN npm install && \
+    npm run build
+
+FROM nginx:alpine

 ADD root /

-RUN apk add --no-cache nginx php7 php7-fpm php7-session php7-json php7-exif php7-imagick php7-gd php7-fileinfo nodejs npm git \
-    && mkdir /run/nginx/
+RUN apk add --no-cache php7 php7-fpm php7-session php7-json php7-exif php7-imagick php7-gd php7-fileinfo

 RUN sed -i '/\[global\]/a daemonize = no' /etc/php7/php-fpm.conf
 RUN sed -i "s/user = nobody/user = nginx/g" /etc/php7/php-fpm.d/www.conf
 RUN sed -i "s/group = nobody/group = nginx/g" /etc/php7/php-fpm.d/www.conf

-RUN git clone https://gitee.com/luweitao_y/h5ai.git /h5ai
-WORKDIR /h5ai
+COPY --from=Builder /h5ai/build/_h5ai /srv/_h5ai

-# build h5ai package
-RUN npm install \
-    && npm run build \
-    && cp -r ./build/_h5ai /srv \
-    && chown -R nginx:nginx /srv/_h5ai
+RUN chown -R nginx:nginx /srv/_h5ai

 ENTRYPOINT ["/sbin/entrypoint.sh"]
--
2.23.0
[PATCH v4 compass-ci 2/5] src/lib/web_backend.rb: implement for group jobs stats count
by Li Yuanchao, 24 Mar '21

Query jobs by conditions such as group_id or suite, and count the number of passed and failed cases. The output looks like:

{
  "kezhiming": {
    "nr_all": $nr_all,
    "nr_pass": $nr_pass,
    "nr_fail": $nr_fail
  },
  "chenqun": {
    "nr_all": $nr_all,
    "nr_pass": $nr_pass,
    "nr_fail": $nr_fail
  },
  ...
}

Signed-off-by: Li Yuanchao <lyc163mail(a)163.com>
---
 src/lib/web_backend.rb | 68 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/src/lib/web_backend.rb b/src/lib/web_backend.rb
index 89c0f3c..04bccdb 100644
--- a/src/lib/web_backend.rb
+++ b/src/lib/web_backend.rb
@@ -121,6 +121,8 @@ end
 def get_dimension_conditions(params)
   dimension = params.key?(:dimension) ? [params.delete(:dimension)] : []
+  dimension = params.key?(:GROUP_BY) ? [params.delete(:GROUP_BY)] : [] if dimension.empty?
+
   conditions = {}
   FIELDS.each do |f|
     v = params[f]
@@ -548,3 +550,69 @@ def new_refs_statistics(params)
   end
   [200, headers.merge('Access-Control-Allow-Origin' => '*'), body]
 end
+
+def single_count(stats)
+  fail_count = 0
+  pass_count = 0
+  single_nr_fail = 0
+  single_nr_pass = 0
+  stats.each do |stat, value|
+    fail_count += 1 if stat.match(/\.fail$/i)
+    pass_count += 1 if stat.match(/\.pass$/i)
+    single_nr_fail = value if stat.match(/\.nr_fail$/i)
+    single_nr_pass = value if stat.match(/\.nr_pass$/i)
+  end
+  fail_count = single_nr_fail.zero? ? fail_count : single_nr_fail
+  pass_count = single_nr_pass.zero? ? pass_count : single_nr_pass
+  [fail_count, pass_count, fail_count + pass_count]
+end
+
+def count_stats(job_list)
+  nr_fail = 0
+  nr_pass = 0
+  nr_all = 0
+  job_list.each do |job|
+    next unless job['_source']['stats']
+
+    fail_count, pass_count, all_count = single_count(job['_source']['stats'])
+    nr_fail += fail_count
+    nr_pass += pass_count
+    nr_all += all_count
+  end
+  { 'nr_fail' => nr_fail, 'nr_pass' => nr_pass, 'nr_all' => nr_all }
+end
+
+def get_jobs_stats_count(dimension, must, size, from)
+  dimension_list = get_dimension_list(dimension)
+  stats_count = {}
+  dimension_list.each do |dim|
+    job_list = query_dimension(dimension[0], dim, must, size, from)
+    stats_count[dim] = count_stats(job_list)
+  end
+  stats_count.to_json
+end
+
+def get_stats_by_dimension(conditions, dimension, must, size, from)
+  must += build_multi_field_subquery_body(conditions)
+  count_query = { query: { bool: { must: must } } }
+  total = es_count(count_query)
+  return {} if total < 1
+
+  get_jobs_stats_count(dimension, must, size, from)
+end
+
+def get_jobs_stats(params)
+  dimension, conditions = get_dimension_conditions(params)
+  must = get_es_must(params)
+  get_stats_by_dimension(conditions, dimension, must, 1000, 0)
+end
+
+def group_jobs_stats(params)
+  begin
+    body = get_jobs_stats(params)
+  rescue StandardError => e
+    warn e.message
+    return [500, headers.merge('Access-Control-Allow-Origin' => '*'), 'group jobs table error']
+  end
+  [200, headers.merge('Access-Control-Allow-Origin' => '*'), body]
+end
--
2.23.0
[PATCH lkp-tests] tests/rpmbuild-pkg: add build way from srpm
by Wang Yong, 24 Mar '21

Originally RPMs were built from a git repo; now building from an SRPM is supported as well.

Signed-off-by: Wang Yong <wangyong0117(a)qq.com>
---
 tests/rpmbuild-pkg | 73 ++++++++++++++++++++++++++++++++++++----------
 1 file changed, 57 insertions(+), 16 deletions(-)

diff --git a/tests/rpmbuild-pkg b/tests/rpmbuild-pkg
index 256e836bf..17169a31b 100755
--- a/tests/rpmbuild-pkg
+++ b/tests/rpmbuild-pkg
@@ -1,6 +1,8 @@
 #!/bin/bash
 # - upstream_repo
 # - compat_os
+# - repo_name
+# - repo_addr

 . $LKP_SRC/lib/debug.sh
 . $LKP_SRC/lib/upload.sh
@@ -8,9 +10,38 @@
 : "${compat_os:=budding-openeuler}"

 [ -n "$upstream_repo" ] || die "upstream_repo is empty"
-package_name=${upstream_repo##*/}
-rpm_dest="/initrd/rpmbuild-pkg/${os}-${os_version}/${compat_os}/${os_arch}/Packages"
-src_rpm_dest="/initrd/rpmbuild-pkg/${os}-${os_version}/${compat_os}/source/Packages"
+
+dest_dir="/initrd/rpmbuild-pkg/${os}-${os_version}/${compat_os}"
+
+from_git()
+{
+    package_name=${upstream_repo##*/}
+    rpm_dest="${dest_dir}/${os_arch}/Packages"
+    src_rpm_dest="${dest_dir}/source/Packages"
+
+    init_workspace
+    download_upstream_repo
+}
+
+from_srpm()
+{
+    [ -n "$repo_name" ] || die "repo_name is empty"
+    [ -n "$repo_addr" ] || die "repo_addr is empty"
+
+    rpm_dest="${dest_dir}/${repo_name}/${os_arch}/Packages"
+    src_rpm_dest="${dest_dir}/${repo_name}/source/Packages"
+
+    install_srpm
+}
+
+check_flow()
+{
+    if [ -n "$repo_name" ]; then
+        from_srpm
+    else
+        from_git
+    fi
+}

 init_workspace()
 {
@@ -20,30 +51,41 @@ init_workspace()

 download_upstream_repo()
 {
-    git clone "git://$GIT_SERVER/openeuler/${upstream_repo}" || die "clone git repo ${package_name} failed: git://$GIT_SERVER/openeuler/${upstream_repo}"
-    cd $package_name || exit
+    git clone "git://${GIT_SERVER}/openeuler/${upstream_repo}" || die "clone git repo ${package_name} failed: git://${GIT_SERVER}/openeuler/${upstream_repo}"
+    cd "$package_name" || exit
     filelist=$(git ls-files)
+
     for pkgfile in ${filelist[@]}
     do
         local dir="SOURCES"
-        echo $pkgfile | egrep "\.spec$" && dir="SPECS"
+
+        echo "$pkgfile" | grep -E "\\.spec$" && dir="SPECS"
         mv "$pkgfile" "${HOME}/rpmbuild/${dir}/"
     done
 }

-build_rpm()
+install_srpm()
 {
-    local spec_file=${HOME}/rpmbuild/SPECS/$package_name.spec
-
-    # HTTP is proxy cache friendly
-    sed -i 's/^\(Source[^ ]*:[ \t]*\)https/\1http/g' `grep http -rl $spec_file`
+    rpm -i "${repo_addr}/${upstream_repo}" >/dev/null || die "failed to install source rpm"
+}
+build_rpm()
+{
+    local spec_dir="${HOME}/rpmbuild/SPECS"
+    [ -n "$package_name" ] &&
+    {
+        # HTTP is proxy cache friendly
+        sed -i 's/^\(Source[^ ]*:[ \t]*\)https/\1http/g' "$(grep http -rl "$spec_dir/${package_name}.spec")"
+    }

     # Install build depends
-    yum-builddep -y $spec_file || exit
+    yum-builddep -y "$spec_dir"/*.spec || die "failed to solve dependencies"

     # Download tar.gz to default path ${HOME}/rpmbuild/SOURCE
-    spectool -g -R $spec_file || exit
+    [ -n "$package_name" ] &&
+    {
+        spectool -g -R "$spec_dir/${package_name}.spec" || die "failed to download source file"
+    }

     # Building rpm or srpm packages
-    rpmbuild -ba $spec_file || exit
+    rpmbuild -ba "$spec_dir"/*.spec || die "failed to build rpms"
 }

 show_rpm_files()
@@ -71,7 +113,6 @@ upload_rpm_pkg()
     done
 }

-init_workspace
-download_upstream_repo
+check_flow
 build_rpm
 upload_rpm_pkg
--
2.23.0
[PATCH v3 compass-ci 2/2] providers: fix require error
by Wu Zhende, 24 Mar '21

[Error]
can't load such file "../lib/mq_client"

[How]
Use require_relative.

Signed-off-by: Wu Zhende <wuzhende666(a)163.com>
---
 providers/multi-qemu | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/providers/multi-qemu b/providers/multi-qemu
index d3f8406..01ba3d5 100755
--- a/providers/multi-qemu
+++ b/providers/multi-qemu
@@ -7,7 +7,7 @@ require 'fileutils'
 require 'optparse'
 require 'json'

-require '../lib/mq_client'
+require_relative "../lib/mq_client"

 opt = {}
 options = OptionParser.new do |opts|
--
2.23.0
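
For context on why the one-line change works: require with a "./" or "../" path is resolved against the process's current working directory, so it only succeeds when multi-qemu happens to be started from the providers/ directory, whereas require_relative resolves against the requiring file's own location. A minimal sketch (the layout shown is hypothetical):

    # Hypothetical layout: <repo>/providers/multi-qemu requiring <repo>/lib/mq_client.rb
    #
    #   require '../lib/mq_client'           # resolved against Dir.pwd -> breaks when the
    #                                        # process is launched from another directory
    #   require_relative '../lib/mq_client'  # resolved against this file's own directory,
    #                                        # so it works from any working directory
    #
    # The two anchor directories can be printed to see the difference:
    puts Dir.pwd   # base for a plain relative require
    puts __dir__   # base for require_relative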
[PATCH v3 compass-ci 1/2] service/sshr: change port
by Wu Zhende, 24 Mar '21

Restarting the sshr service would disconnect existing sshr connections, so machines that have already been applied for could no longer be accessed. Therefore, instead of restarting the sshr service, we start a new sshr service on a new port.

Signed-off-by: Wu Zhende <wuzhende666(a)163.com>
---
 container/scheduler-https/start | 2 +-
 container/scheduler/start       | 4 ++--
 container/ssh-r/start           | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/container/scheduler-https/start b/container/scheduler-https/start
index ee93e79..4ec84a2 100755
--- a/container/scheduler-https/start
+++ b/container/scheduler-https/start
@@ -38,7 +38,7 @@ names = Set.new %w[
 ]

 defaults = relevant_defaults(names)
-defaults['SSHR_PORT'] ||= 5050
+defaults['SSHR_PORT'] ||= 5051
 defaults['SSHR_PORT_BASE'] ||= 21000
 defaults['SSHR_PORT_LEN'] ||= 2000
 defaults['SCHED_PORT'] ||= '3000'
diff --git a/container/scheduler/start b/container/scheduler/start
index e655dd4..8d800f9 100755
--- a/container/scheduler/start
+++ b/container/scheduler/start
@@ -31,8 +31,8 @@ names = Set.new %w[
 ]

 defaults = relevant_defaults(names)
-defaults['SSHR_PORT'] ||= 5050
-defaults['SSHR_PORT_BASE'] ||= 50000
+defaults['SSHR_PORT'] ||= 5051
+defaults['SSHR_PORT_BASE'] ||= 21000
 defaults['SSHR_PORT_LEN'] ||= 2000
 defaults['SCHED_PORT'] ||= '3000'
 defaults['SCHED_HOST'] ||= '172.17.0.1'
diff --git a/container/ssh-r/start b/container/ssh-r/start
index 6c7a373..68a63f2 100755
--- a/container/ssh-r/start
+++ b/container/ssh-r/start
@@ -16,7 +16,7 @@ cmd=(
   -e TCP_FORWARDING=true
   -d
   -p 21000-23999:21000-23999
-  -p 5050:22
+  -p 5051:22
   -v /etc/localtime:/etc/localtime:ro
   -v /srv/pub/sshr/keys/:/etc/ssh/keys
   ssh-r:0.001
--
2.23.0