mailweb.openeuler.org

Compass-ci

compass-ci@openeuler.org
1 participant, 5236 discussions
[PATCH compass-ci] service/lifecycle: add lifecycle
by Wu Zhende 03 Mar '21

Function:
- processing timeout jobs/machines
- processing crash jobs/machines

Signed-off-by: Wu Zhende <wuzhende666(a)163.com>
---
 src/lib/lifecycle.cr | 310 +++++++++++++++++++++++++++++++++++++++++--
 src/lifecycle.cr     |  17 ++-
 2 files changed, 311 insertions(+), 16 deletions(-)

```diff
diff --git a/src/lib/lifecycle.cr b/src/lib/lifecycle.cr
index af5cd07..8c52f11 100644
--- a/src/lib/lifecycle.cr
+++ b/src/lib/lifecycle.cr
@@ -1,19 +1,40 @@
 # SPDX-License-Identifier: MulanPSL-2.0+
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
 
+require "set"
 require "kemal"
 require "yaml"
-require "./web_env"
+require "./mq"
+require "./scheduler_api"
 require "../scheduler/elasticsearch_client"
+require "../lifecycle/constants"
+
+class String
+  def bigger_than?(time)
+    return false if self.empty?
+
+    time = time.to_s
+    return true if time.empty?
+
+    time = Time.parse(time, "%Y-%m-%dT%H:%M:%S", Time.local.location)
+    self_time = Time.parse(self, "%Y-%m-%dT%H:%M:%S", Time.local.location)
+
+    self_time > time
+  end
+end
 
 class Lifecycle
   property es
 
-  def initialize(env : HTTP::Server::Context)
+  def initialize
+    @mq = MQClient.instance
     @es = Elasticsearch::Client.new
-    @env = env
-    @log = env.log.as(JSONLogger)
+    @scheduler_api = SchedulerAPI.new
+    @log = JSONLogger.new
+    @jobs = Hash(String, JSON::Any).new
+    @machines = Hash(String, JSON::Any).new
+    @match = Hash(String, Set(String)).new {|h, k| h[k] = Set(String).new}
   end
 
   def alive(version)
@@ -22,18 +43,287 @@ class Lifecycle
     @log.warn(e)
   end
 
-  def get_running_testbox
-    size = @env.params.query["size"]? || 20
-    from = @env.params.query["from"]? || 0
+  def init_from_es
+    jobs = get_active_jobs
+    jobs.each do |result|
+      job_id = result["_id"].to_s
+      job = result["_source"].as_h
+      job.delete_if{|key, _| !JOB_KEYWORDS.includes?(key)}
+
+      @jobs[job_id] = JSON.parse(job.to_json)
+      @match[job["testbox"].to_s] << job_id
+    end
+
+    machines = get_active_machines
+    machines.each do |result|
+      testbox = result["_id"].to_s
+      machine = result["_source"].as_h
+      machine.delete("history")
+
+      machine = JSON.parse(machine.to_json)
+      @machines[testbox] = machine
+
+      deal_match_job(testbox, machine["job_id"].to_s)
+    end
+  end
+
+  def deal_match_job(testbox, job_id)
+    @match[testbox].each do |id|
+      next if id == job_id
+
+      msg = {
+        "job_id" => id,
+        "job_state" => "occupied",
+        "testbox" => testbox
+      }
+      @mq.pushlish_confirm("job_mq", msg.to_json)
+      @match[testbox].delete(id)
+    end
+  end
+
+  def get_active_jobs
+    query = {
+      "size" => 10000,
+      "query" => {
+        "term" => {
+          "job_state" => "boot"
+        }
+      }
+    }
+    @es.search("jobs", query)
+  end
+
+  def get_active_machines
     query = {
-      "size" => size,
-      "from" => from,
+      "size" => 10000,
       "query" => {
         "terms" => {
-          "state" => ["booting", "running"]
+          "state" => ["booting", "running", "rebooting"]
         }
       }
     }
     @es.search("testbox", query)
   end
+
+  def deal_job_events_from_mq
+    q = @mq.ch.queue("job_mq")
+    q.subscribe(no_ack: false) do |msg|
+      event = JSON.parse(msg.body_io.to_s)
+      job_state = event["job_state"]?
+
+      case job_state
+      when "boot"
+        deal_boot_event(event)
+      when "close"
+        deal_close_event(event)
+      when "occupied"
+        deal_occupied_event(event)
+      else
+        deal_other_event(event)
+      end
+      @mq.ch.basic_ack(msg.delivery_tag)
+    end
+  end
+
+  def deal_other_event(event)
+    event_job_id = event["job_id"].to_s
+    return if event_job_id.empty?
+
+    update_cached_job(event_job_id, event)
+
+    job = @jobs[event_job_id]?
+    return unless job
+
+    testbox = job["testbox"].to_s
+    update_cached_machine(testbox, event)
+  end
+
+  def update_cached_machine(testbox, event)
+    machine = @machines[testbox]?
+    return if machine && !event["time"].to_s.bigger_than?(machine["time"]?)
+
+    update_es_machine_time(testbox, event)
+  end
+
+  def update_es_machine_time(testbox, event)
+    machine = @es.get_tbox(testbox)
+    return unless machine
+    return unless event["time"].to_s.bigger_than?(machine["time"]?)
+
+    machine.as_h.delete("history")
+    machine.as_h["time"] = event["time"]
+    machine.as_h["state"] = JSON::Any.new("booting")
+    @machines[testbox] = machine
+    @es.update_tbox(testbox, machine.as_h)
+  end
+
+  def update_cached_job(job_id, event)
+    job = @jobs[job_id]?
+    if job
+      @jobs[job_id] = JSON.parse(job.as_h.merge!(event.as_h).to_json)
+    else
+      job = @es.get_job(job_id)
+      return unless job
+      return if JOB_CLOSE_STATE.includes?(job["job_state"]?)
+
+      job = job.dump_to_json_any.as_h
+      job.delete_if{|key, _| !JOB_KEYWORDS.includes?(key)}
+      job["job_state"] = event["job_state"]
+      @jobs[job_id] = JSON.parse(job.to_json)
+    end
+  end
+
+  def deal_occupied_event(event)
+    event_job_id = event["job_id"].to_s
+    return unless @jobs.has_key?(event_job_id)
+
+    @jobs.delete(event_job_id)
+    spawn @scheduler_api.close_job(event_job_id, "occupied", "lifecycle")
+  end
+
+  def deal_close_event(event)
+    event_job_id = event["job_id"].to_s
+    job = @jobs[event_job_id]
+
+    return unless job
+
+    @jobs.delete(event_job_id)
+    update_cached_machine(job["testbox"].to_s, event)
+  end
+
+  def deal_boot_event(event)
+    event_job_id = event["job_id"]?.to_s
+    @jobs[event_job_id] = event unless event_job_id.empty?
+    machine = @machines[event["testbox"]]?
+    deal_boot_machine(machine, event)
+  end
+
+  def deal_boot_machine(machine, event)
+    event_job_id = event["job_id"]?.to_s
+    if machine
+      machine_job_id = machine["job_id"].to_s
+      # The job is not updated
+      # No action is required
+      return if event_job_id == machine_job_id
+
+      time = machine["time"]?
+      # Skip obsolete event
+      return unless event["time"].to_s.bigger_than?(time)
+
+      @machines[event["testbox"].to_s] = event
+      deal_match_job(event["testbox"].to_s, event_job_id)
+
+      # No previous job to process
+      return if machine_job_id.empty?
+      return unless @jobs.has_key?(machine_job_id)
+
+      @jobs.delete(machine_job_id)
+      spawn @scheduler_api.close_job(machine_job_id, "occupied", "lifecycle")
+    else
+      @machines[event["testbox"].to_s] = event
+    end
+  end
+
+  def max_time(times)
+    result = ""
+    times.each do |time|
+      result = time if time.to_s.bigger_than?(result)
+    end
+    return result
+  end
+
+  def deal_timeout_job
+    dead_job_id = nil
+    loop do
+      close_job(dead_job_id, "timeout") if dead_job_id
+      deadline, dead_job_id = get_min_deadline
+
+      # deal timeout job
+      next if dead_job_id && deadline <= Time.local
+
+      sleep_until(deadline)
+    end
+  end
+
+  def deal_timeout_machine
+    dead_machine_name = nil
+    loop do
+      reboot_timeout_machine(dead_machine_name) if dead_machine_name
+      deadline, dead_machine_name = get_min_deadline_machine
+
+      next if dead_machine_name && deadline <= Time.local
+
+      sleep_until(deadline)
+    end
+  end
+
+  def sleep_until(deadline)
+    s = (deadline - Time.local).total_seconds
+    sleep(s)
+  end
+
+  def get_min_deadline
+    deadline = (Time.local + 60.second)
+    dead_job_id = nil
+    @jobs.each do |id, job|
+      next unless job["deadline"]?
+      job_deadline = Time.parse(job["deadline"].to_s, "%Y-%m-%dT%H:%M:%S", Time.local.location)
+      return job_deadline, id if Time.local >= job_deadline
+      next unless deadline > job_deadline
+
+      deadline = job_deadline
+      dead_job_id = id
+    end
+    return deadline, dead_job_id
+  end
+
+  def get_min_deadline_machine
+    deadline = (Time.local + 60.second)
+    dead_machine_name = nil
+    @machines.each do |name, machine|
+      next if machine["deadline"]?.to_s.empty?
+      machine_deadline = Time.parse(machine["deadline"].to_s, "%Y-%m-%dT%H:%M:%S", Time.local.location)
+      return machine_deadline, name if Time.local >= machine_deadline
+      next unless deadline > machine_deadline
+
+      deadline = machine_deadline
+      dead_machine_name = name
+    end
+    return deadline, dead_machine_name
+  end
+
+  def close_job(job_id, reason)
+    @jobs.delete(job_id)
+    spawn @scheduler_api.close_job(job_id, reason, "lifecycle")
+  end
+
+  def reboot_timeout_machine(testbox)
+    @machines.delete(testbox)
+    machine = @es.get_tbox(testbox)
+
+    return unless machine
+    return if MACHINE_CLOSE_STATE.includes?(machine["state"])
+
+    deadline = machine["deadline"]?
+    return unless deadline
+
+    deadline = Time.parse(deadline.to_s, "%Y-%m-%dT%H:%M:%S", Time.local.location)
+    return if Time.local < deadline
+
+    mq_queue = get_machine_reboot_queue(testbox)
+    @mq.pushlish_confirm(mq_queue, machine.to_json)
+
+    machine["state"] = "rebooting_queue"
+    machine["time"] = Time.local.to_s("%Y-%m-%dT%H:%M:%S+0800")
+    @es.update_tbox(testbox, machine.as_h)
+  end
+
+  def get_machine_reboot_queue(testbox)
+    if testbox.includes?(".")
+      testbox =~ /.*\.(.*)-\d+$/
+    else
+      testbox =~ /(.*)--.*/
+    end
+    $1
+  end
 end
diff --git a/src/lifecycle.cr b/src/lifecycle.cr
index a864621..f73cef5 100644
--- a/src/lifecycle.cr
+++ b/src/lifecycle.cr
@@ -2,15 +2,20 @@
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
 
 require "lifecycle/lifecycle"
-require "./lifecycle/constants.cr"
+require "./lifecycle/constants"
 require "./lib/json_logger"
+require "./lib/lifecycle"
 
 module Cycle
   log = JSONLogger.new
+  lifecycle = Lifecycle.new
 
-  begin
-    Kemal.run(LIFECYCLE_PORT)
-  rescue e
-    log.error(e)
-  end
+  # init @jobs and @machines
+  lifecycle.init_from_es
+  lifecycle.deal_job_events_from_mq
+
+  spawn lifecycle.deal_timeout_job
+  spawn lifecycle.deal_timeout_machine
+
+  Kemal.run(LIFECYCLE_PORT)
 end
```
--
2.23.0
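The two timeout loops (`deal_timeout_job` / `deal_timeout_machine`) both rely on the same scan: find the entry with the nearest deadline, wake up at most 60 seconds out, and report any entry that has already expired. A minimal Ruby sketch of that selection logic (illustrative only — the patch itself is Crystal, and the hash shapes here are assumptions):

```ruby
require "time"

# Mirror of get_min_deadline's scan: return the earliest deadline and its id.
# An already-expired job is returned immediately; otherwise the default
# wake-up is 60 seconds from "now".
def min_deadline(jobs, now)
  deadline = now + 60
  dead_job_id = nil
  jobs.each do |id, job|
    next unless job["deadline"]            # jobs without a deadline are skipped
    job_deadline = Time.parse(job["deadline"])
    return [job_deadline, id] if now >= job_deadline  # already timed out
    next unless deadline > job_deadline

    deadline = job_deadline                # remember the nearest future deadline
    dead_job_id = id
  end
  [deadline, dead_job_id]
end

now = Time.parse("2021-03-03T12:00:00")
jobs = {
  "job1" => { "deadline" => "2021-03-03T12:00:30" },
  "job2" => { "deadline" => "2021-03-03T12:05:00" },
  "job3" => {},                            # no deadline yet -> skipped
}
deadline, id = min_deadline(jobs, now)
# job1 expires first, well inside the 60-second default window
```

The loop in the patch then sleeps until the returned deadline and closes the job on the next iteration if it is still expired.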
[PATCH v2 compass-ci] container/submit: add attach directory
by Luan Shengde 03 Mar '21

Attach the local lkp-tests directory to the container when running it to submit jobs.

[why]
Enable users to edit code or job file(s) before submitting jobs;
they can adjust the code or job files according to their requirements.

Signed-off-by: Luan Shengde <shdluan(a)163.com>
---
 container/submit/submit | 1 +
 1 file changed, 1 insertion(+)

```diff
diff --git a/container/submit/submit b/container/submit/submit
index 6236892..ada05f9 100755
--- a/container/submit/submit
+++ b/container/submit/submit
@@ -13,6 +13,7 @@ cmd=(
 	--name=submit-$USER-$data_suffix
 	-it
 	-v /etc/compass-ci:/etc/compass-ci:ro
+	-v $LKP_SRC:/root/lkp-tests:ro
 	-v $HOME/.config:/root/.config:ro
 	-v $HOME/.ssh:/root/.ssh:rw
 	submit
```
--
2.23.0
[PATCH compass-ci] scheduler/elasticsearch_client.cr: skip requesting state
by Wu Zhende 03 Mar '21

When a testbox is updated, its current state is saved to the history. The "requesting" state should be skipped because its job_id is empty, so storing that record is useless.

Signed-off-by: Wu Zhende <wuzhende666(a)163.com>
---
 src/scheduler/elasticsearch_client.cr | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

```diff
diff --git a/src/scheduler/elasticsearch_client.cr b/src/scheduler/elasticsearch_client.cr
index e343809..91fb84a 100644
--- a/src/scheduler/elasticsearch_client.cr
+++ b/src/scheduler/elasticsearch_client.cr
@@ -105,7 +105,7 @@ class Elasticsearch::Client
   end
 
   history ||= [] of JSON::Any
-  history << JSON.parse(wtmp_hash.to_json)
+  history << JSON.parse(wtmp_hash.to_json) unless wtmp_hash["state"]?.to_s == "requesting"
   history = JSON.parse(history.to_json)
 
   body = { "history" => history}
```
--
2.23.0
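Seen in isolation, the one-line change is a filter on what gets appended to the history. A small Ruby sketch of that filter (the record shapes are assumptions, not the real wtmp hashes):

```ruby
# Append a testbox state record to history, skipping "requesting" entries,
# which carry an empty job_id and are not worth storing.
def append_history(history, wtmp_hash)
  history << wtmp_hash unless wtmp_hash["state"].to_s == "requesting"
  history
end

history = []
append_history(history, { "state" => "requesting", "job_id" => "" })
append_history(history, { "state" => "booting", "job_id" => "z9.1234" })
# history now holds only the "booting" record
```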
[PATCH v4 compass-ci] doc/manual: add document for submit container
by Luan Shengde 03 Mar '21

Signed-off-by: Luan Shengde <shdluan(a)163.com>
---
 doc/manual/build-lkp-test-container.en.md | 61 +++++++++++++++++++++++
 1 file changed, 61 insertions(+)
 create mode 100644 doc/manual/build-lkp-test-container.en.md

```diff
diff --git a/doc/manual/build-lkp-test-container.en.md b/doc/manual/build-lkp-test-container.en.md
new file mode 100644
index 0000000..b4f06bd
--- /dev/null
+++ b/doc/manual/build-lkp-test-container.en.md
@@ -0,0 +1,61 @@
+# Preface
+
+We provide a docker container to suit various of Linux OS(es).
+In this case you do not need to install the lkp-tests to your local server.
+Also you can avoid installation failures from undesired dependency package(s).
+
+# Prepare
+
+- install docker
+- apply account and config default yaml
+- generate ssh key(s)
+
+# build container
+
+## 1. download resource
+
+   Download lkp-tests and compass-ci to your local server.
+
+   Command(s):
+
+      git clone https://gitee.com/wu_fengguang/compass-ci.git
+      git clone https://gitee.com/wu_fengguang/lkp-tests.git
+
+## 2. add environment variable(s)
+
+   Command(s):
+
+      echo "export LKP_SRC=$PWD/lkp-tests" >> ~/.${SHELL##*/}rc
+      echo "export CCI_SRC=$PWD/compass-ci" >> ~/.${SHELL##*/}rc
+      source ~/.${SHELL##*/}rc
+
+## 3. build docker image
+
+   Command(s):
+
+      cd compass-ci/container/submit
+      ./build
+
+## 4. add executable file
+
+   Command(s):
+
+      ln -s $CCI_SRC/container/submit/submit /usr/bin/submit
+
+# try it
+
+   instruction:
+
+   You can directly use the command 'submit' to submit jobs.
+   It is the same as you install the lkp-tests on your own server.
+   It will start a disposable container to submit your job.
+   The container will attach the directory lkp-test to the container itself.
+   You can edit the job yaml(s) in lkp-test/jobs and it will take effect when you submit jobs.
+
+   Example:
+
+      submit -c -m testbox=vm-2p8g borrow-1h.yaml
+
+   About submit:
+
+   For detailed usage for command submit, please reference to: [submit user manual](https://gitee.com/wu_fengguang/compass-ci/blob/master/doc/manual/su…
```
--
2.23.0
[PATCH compass-ci] container/submit: add attach directory
by Luan Shengde 03 Mar '21

Attach lkp-tests/jobs to the container when running it.

[why]
Enable users to submit jobs with custom job files.
Users can edit the job files according to their requirements, and the job file takes effect when using the submit container to submit jobs.

Signed-off-by: Luan Shengde <shdluan(a)163.com>
---
 container/submit/submit | 1 +
 1 file changed, 1 insertion(+)

```diff
diff --git a/container/submit/submit b/container/submit/submit
index 6236892..ca6eb72 100755
--- a/container/submit/submit
+++ b/container/submit/submit
@@ -13,6 +13,7 @@ cmd=(
 	--name=submit-$USER-$data_suffix
 	-it
 	-v /etc/compass-ci:/etc/compass-ci:ro
+	-v $LKP_SRC/jobs:/root/lkp-tests/jobs:ro
 	-v $HOME/.config:/root/.config:ro
 	-v $HOME/.ssh:/root/.ssh:rw
 	submit
```
--
2.23.0
[PATCH compass-ci 2/4] sparrow/0-package/read-config: export config yaml info
by Liu Yinsi 03 Mar '21

Export my_email, my_name, and server_ip to deploy compass-ci.

Signed-off-by: Liu Yinsi <liuyinsi(a)163.com>
---
 sparrow/0-package/read-config | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
 create mode 100755 sparrow/0-package/read-config

```diff
diff --git a/sparrow/0-package/read-config b/sparrow/0-package/read-config
new file mode 100755
index 0000000..9da6ffc
--- /dev/null
+++ b/sparrow/0-package/read-config
@@ -0,0 +1,14 @@
+#!/bin/bash
+# SPDX-License-Identifier: MulanPSL-2.0+
+# Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
+# export config info setup.yaml server_ip, my_email, my_name.
+
+mkdir -p /etc/compass-ci/
+cp -a $CCI_SRC/sparrow/setup.yaml /etc/compass-ci/setup.yaml
+
+options=( server_ip my_name my_email )
+
+for option in ${options[@]}
+do
+	export $option=$(grep "^$option:" /etc/compass-ci/setup.yaml |awk -F ": " '{print $2}')
+done
```
--
2.23.0
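The grep/awk loop above pulls a fixed set of `key: value` options out of setup.yaml and exports each one. The same extraction can be sketched in Ruby (the sample values are made up, not real config):

```ruby
# Extract selected "key: value" options from setup.yaml-style text,
# mirroring read-config's grep "^$option:" | awk -F ": " '{print $2}'.
SETUP_YAML = <<~YAML
  server_ip: 192.168.1.10
  my_name: Jane Doe
  my_email: jane(a)example.com
  other_key: ignored
YAML

options = %w[server_ip my_name my_email]
config = {}
SETUP_YAML.each_line do |line|
  key, value = line.chomp.split(": ", 2)   # split on the first ": " only
  config[key] = value if options.include?(key)
end
# keys outside the allow-list (other_key) are dropped
```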
[PATCH lkp-tests] setup/simplify-ci: call $CCI_SRC/sparrow/install-client
by Liu Yinsi 03 Mar '21

Move the simplify-ci main function into the $CCI_SRC/sparrow/install-client script to make the code more reusable.

Signed-off-by: Liu Yinsi <liuyinsi(a)163.com>
---
 setup/simplify-ci | 61 ++++------------------------------------------
 1 file changed, 5 insertions(+), 56 deletions(-)

```diff
diff --git a/setup/simplify-ci b/setup/simplify-ci
index 5aede9cdb..107cef2e8 100755
--- a/setup/simplify-ci
+++ b/setup/simplify-ci
@@ -2,8 +2,7 @@
 # SPDX-License-Identifier: MulanPSL-2.0+
 # Copyright (c) 2020 Huawei Technologies Co., Ltd. All rights reserved.
 
-: ${SCHED_HOST:=172.17.0.1}
-: ${SCHED_PORT:=3000}
+export server_ip=$SCHED_HOST
 
 git_ci()
 {
@@ -17,61 +16,11 @@ git_ci()
 	git clone https://gitee.com/wu_fengguang/compass-ci.git /c/compass-ci || return 1
 }
 
-dev_env()
+deploy()
 {
-	export sched_host=$SCHED_HOST
-	export sched_port=$SCHED_PORT
-	3-code/dev-env
+	cd /c/compass-ci/sparrow && ./install-client
 }
 
-install_env()
-{
-	cd /c/compass-ci/sparrow || return
-	0-package/install
-	1-storage/tiny
-	5-build/ipxe &
-	1-storage/permission
-	2-network/br0
-	2-network/iptables
-	3-code/git
-	dev_env
-	. /etc/profile.d/compass.sh
-}
-
-boot_ipxe()
-{
-	sed -i "s%172.17.0.1%$SCHED_HOST%g" /tftpboot/boot.ipxe
-	sed -i "s%3000%$SCHED_PORT%g" /tftpboot/boot.ipxe
-}
-
-run_service()
-{
-	(
-		cd $CCI_SRC/container/dnsmasq || return
-		./build
-		./start
-		boot_ipxe
-	)&
-	(
-		cd $CCI_SRC/container/qemu-efi || return
-		./build
-		./install
-	)&
-	(
-		cd $CCI_SRC/container/fluentd-base || return
-		./build
-		cd $CCI_SRC/container/sub-fluentd || return
-		./build
-		./start
-	)&
-}
-
-main()
-{
-	git_ci || return 1
-	install_env
-	run_service
-}
+git_ci || return 1
+deploy
 
-main
-wait
```
--
2.23.0
[PATCH v2 compass-ci] doc/manual: add document for submit container
by Luan Shengde 03 Mar '21

Signed-off-by: Luan Shengde <shdluan(a)163.com>
---
 doc/manual/build-lkp-test-container.en.md | 57 +++++++++++++++++++++++
 1 file changed, 57 insertions(+)
 create mode 100644 doc/manual/build-lkp-test-container.en.md

```diff
diff --git a/doc/manual/build-lkp-test-container.en.md b/doc/manual/build-lkp-test-container.en.md
new file mode 100644
index 0000000..d021185
--- /dev/null
+++ b/doc/manual/build-lkp-test-container.en.md
@@ -0,0 +1,57 @@
+# Preface
+
+We provide a docker container to suit various of Linux OS(es).
+In this case you do not need install the lkp-tests to your own server.
+Also you can avoid installation failures for undesired dependency packages.
+
+# Prepare
+
+- install docker
+- apply account and config default yaml
+- generate ssh keys
+
+# build submit container
+
+## 1. download resource
+
+   Use the following command to downloac lkp-test and compass-ci
+
+      git clone https://gitee.com/wu_fengguang/compass-ci.git
+      git clone https://gitee.com/wu_fengguang/lkp-tests.git
+
+## 2. setup environment variables
+
+   Command:
+
+      echo "export LKP_SRC=$PWD/lkp-tests" >> ~/.${SHELL##*/}rc
+      echo "export CCI_SRC=$PWD/compass-ci" >> ~/.${SHELL##*/}rc
+      source ~/.${SHELL##*/}rc
+
+## 3. build submit image
+
+   Command:
+
+      cd compass-ci/container/submit
+      ./build
+
+## 4. add executable file
+
+   Command:
+
+      ln -s $CCI_SRC/container/submit/submit /usr/bin/submit
+
+# try it
+
+   instruction:
+
+   You can directly use the command 'submit' to submit jobs.
+   It is the same as you install the lkp-tests at your own server.
+   It will start a disposable container to submit your job.
+
+   Example:
+
+      submit -c -m testbox=vm-2p8g borrow-1h.yaml
+
+   About summit:
+
+   For detailed usage for command submit, reference to: [submit user manual](https://gitee.com/wu_fengguang/compass-ci/blob/master/doc/manual/su…
```
--
2.23.0
[PATCH v3 compass-ci] doc/manual: add document for submit container
by Luan Shengde 03 Mar '21

Signed-off-by: Luan Shengde <shdluan(a)163.com>
---
 doc/manual/build-lkp-test-container.en.md | 57 +++++++++++++++++++++++
 1 file changed, 57 insertions(+)
 create mode 100644 doc/manual/build-lkp-test-container.en.md

```diff
diff --git a/doc/manual/build-lkp-test-container.en.md b/doc/manual/build-lkp-test-container.en.md
new file mode 100644
index 0000000..2852a49
--- /dev/null
+++ b/doc/manual/build-lkp-test-container.en.md
@@ -0,0 +1,57 @@
+# Preface
+
+We provide a docker container to suit various of Linux OS(es).
+In this case you do not need install the lkp-tests to your own server.
+Also you can avoid installation failures for undesired dependency packages.
+
+# Prepare
+
+- install docker
+- apply account and config default yaml
+- generate ssh keys
+
+# build submit container
+
+## 1. download resource
+
+   Use the following command to download lkp-test and compass-ci
+
+      git clone https://gitee.com/wu_fengguang/compass-ci.git
+      git clone https://gitee.com/wu_fengguang/lkp-tests.git
+
+## 2. setup environment variables
+
+   Command:
+
+      echo "export LKP_SRC=$PWD/lkp-tests" >> ~/.${SHELL##*/}rc
+      echo "export CCI_SRC=$PWD/compass-ci" >> ~/.${SHELL##*/}rc
+      source ~/.${SHELL##*/}rc
+
+## 3. build submit image
+
+   Command:
+
+      cd compass-ci/container/submit
+      ./build
+
+## 4. add executable file
+
+   Command:
+
+      ln -s $CCI_SRC/container/submit/submit /usr/bin/submit
+
+# try it
+
+   instruction:
+
+   You can directly use the command 'submit' to submit jobs.
+   It is the same as you install the lkp-tests on your own server.
+   It will start a disposable container to submit your job.
+
+   Example:
+
+      submit -c -m testbox=vm-2p8g borrow-1h.yaml
+
+   About summit:
+
+   For detailed usage for command submit, please reference to: [submit user manual](https://gitee.com/wu_fengguang/compass-ci/blob/master/doc/manual/su…
```
--
2.23.0
[PATCH v3 compass-ci 1/2] sparrow/4-docker/buildall: check whether skip ssh-r
by Liu Yinsi 03 Mar '21

[why]
For users who deploy compass-ci locally, ssh-r does not exist. So check whether the ssh-r container already exists, and skip it when it does.

Signed-off-by: Liu Yinsi <liuyinsi(a)163.com>
---
 sparrow/4-docker/buildall | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

```diff
diff --git a/sparrow/4-docker/buildall b/sparrow/4-docker/buildall
index bb77a1f..aa48bce 100755
--- a/sparrow/4-docker/buildall
+++ b/sparrow/4-docker/buildall
@@ -56,7 +56,9 @@ do_one_run()
 	mkdir $tmpdir/start_$container_name 2>/dev/null &&
 	(
 		cd "$container"
-		[ "$container_name" == 'ssh-r' ] && exit
+		container_id=$(docker ps -aqf name="ssh_r")
+		[ -n "$container_id" ] && exit
+
 		[ -x first-run ] && ./first-run
 		[ -x start ] && ./start
 		[ "$container_name" == 'initrd-lkp' ] && ./run
```
--
2.23.0

Powered by HyperKitty