Replace spin_lock_irqsave with spin_lock in the hard IRQ handlers of SCSI drivers. There are no functional changes, but it may speed things up if interrupts happen very often.
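For illustration only, here is a minimal sketch of the transformation applied throughout the series. The foo_* names and the struct below are invented for the example and do not come from any of the patched drivers:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct foo_host {
        spinlock_t lock;                /* protects completion handling */
};

static void foo_handle_completions(struct foo_host *host)
{
        /* walk the hardware completion queue; details omitted */
}

/* Before: the handler saves and restores the interrupt flags itself. */
static irqreturn_t foo_isr_before(int irq, void *dev_id)
{
        struct foo_host *host = dev_id;
        unsigned long flags;

        spin_lock_irqsave(&host->lock, flags);
        foo_handle_completions(host);
        spin_unlock_irqrestore(&host->lock, flags);
        return IRQ_HANDLED;
}

/* After: the series assumes hard IRQ handlers already run with local
 * interrupts disabled, so the flags save/restore is dropped. */
static irqreturn_t foo_isr_after(int irq, void *dev_id)
{
        struct foo_host *host = dev_id;

        spin_lock(&host->lock);
        foo_handle_completions(host);
        spin_unlock(&host->lock);
        return IRQ_HANDLED;
}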
Xiaofei Tan (32):
  scsi: 53c700: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: ipr: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: lpfc: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: qla4xxx: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: BusLogic: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: a100u2w: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: a2091: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: a3000: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: aha1740: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: bfa: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: esp_scsi: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: gvp11: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: hptiop: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: ibmvscsi: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: initio: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: megaraid: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: mac53c94: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: mesh: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: mvumi: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: myrb: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: myrs: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: ncr53c8xx: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: nsp32: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: pmcraid: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: pcmcia: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: qlogicfas408: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: qlogicpti: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: sgiwd93: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: stex: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: vmw_pvscsi: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: wd719x: Replace spin_lock_irqsave with spin_lock in hard IRQ
  scsi: advansys: Replace spin_lock_irqsave with spin_lock in hard IRQ
 drivers/scsi/53c700.c                     |  5 ++--
 drivers/scsi/BusLogic.c                   |  5 ++--
 drivers/scsi/a100u2w.c                    |  5 ++--
 drivers/scsi/a2091.c                      |  5 ++--
 drivers/scsi/a3000.c                      |  5 ++--
 drivers/scsi/advansys.c                   |  5 ++--
 drivers/scsi/aha1740.c                    |  5 ++--
 drivers/scsi/bfa/bfad.c                   | 20 ++++++-------
 drivers/scsi/esp_scsi.c                   |  5 ++--
 drivers/scsi/gvp11.c                      |  5 ++--
 drivers/scsi/hptiop.c                     |  5 ++--
 drivers/scsi/ibmvscsi/ibmvfc.c            |  5 ++--
 drivers/scsi/initio.c                     |  5 ++--
 drivers/scsi/ipr.c                        | 21 ++++++-------
 drivers/scsi/lpfc/lpfc_sli.c              | 49 +++++++++++++------------------
 drivers/scsi/mac53c94.c                   |  5 ++--
 drivers/scsi/megaraid.c                   | 10 +++----
 drivers/scsi/megaraid/megaraid_sas_base.c |  5 ++--
 drivers/scsi/mesh.c                       |  5 ++--
 drivers/scsi/mvumi.c                      |  7 ++---
 drivers/scsi/myrb.c                       | 20 +++++--------
 drivers/scsi/myrs.c                       | 15 ++++------
 drivers/scsi/ncr53c8xx.c                  |  5 ++--
 drivers/scsi/nsp32.c                      |  5 ++--
 drivers/scsi/pcmcia/sym53c500_cs.c        |  5 ++--
 drivers/scsi/pmcraid.c                    |  8 ++---
 drivers/scsi/qla4xxx/ql4_isr.c            | 15 ++++------
 drivers/scsi/qlogicfas408.c               |  5 ++--
 drivers/scsi/qlogicpti.c                  |  5 ++--
 drivers/scsi/sgiwd93.c                    |  5 ++--
 drivers/scsi/stex.c                       | 16 +++++-----
 drivers/scsi/vmw_pvscsi.c                 |  4 +--
 drivers/scsi/wd719x.c                     |  7 ++---
 33 files changed, 122 insertions(+), 175 deletions(-)
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/53c700.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/53c700.c b/drivers/scsi/53c700.c index 3242ff6..6c2ef46 100644 --- a/drivers/scsi/53c700.c +++ b/drivers/scsi/53c700.c @@ -1491,7 +1491,6 @@ NCR_700_intr(int irq, void *dev_id) __u8 istat; __u32 resume_offset = 0; __u8 pun = 0xff, lun = 0xff; - unsigned long flags; int handled = 0;
/* Use the host lock to serialise access to the 53c700 @@ -1499,7 +1498,7 @@ NCR_700_intr(int irq, void *dev_id) * lock to enter the done routines. When that happens, we * need to ensure that for this driver, the host lock and the * queue lock point to the same thing. */ - spin_lock_irqsave(host->host_lock, flags); + spin_lock(host->host_lock); if((istat = NCR_700_readb(host, ISTAT_REG)) & (SCSI_INT_PENDING | DMA_INT_PENDING)) { __u32 dsps; @@ -1748,7 +1747,7 @@ NCR_700_intr(int irq, void *dev_id) } } out_unlock: - spin_unlock_irqrestore(host->host_lock, flags); + spin_unlock(host->host_lock); return IRQ_RETVAL(handled); }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/ipr.c | 21 +++++++++------------ 1 file changed, 9 insertions(+), 12 deletions(-)
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c index e451102..0309e8f 100644 --- a/drivers/scsi/ipr.c +++ b/drivers/scsi/ipr.c @@ -5815,7 +5815,6 @@ static irqreturn_t ipr_isr(int irq, void *devp) { struct ipr_hrr_queue *hrrq = (struct ipr_hrr_queue *)devp; struct ipr_ioa_cfg *ioa_cfg = hrrq->ioa_cfg; - unsigned long hrrq_flags = 0; u32 int_reg = 0; int num_hrrq = 0; int irq_none = 0; @@ -5823,10 +5822,10 @@ static irqreturn_t ipr_isr(int irq, void *devp) irqreturn_t rc = IRQ_NONE; LIST_HEAD(doneq);
- spin_lock_irqsave(hrrq->lock, hrrq_flags); + spin_lock(hrrq->lock); /* If interrupts are disabled, ignore the interrupt */ if (!hrrq->allow_interrupts) { - spin_unlock_irqrestore(hrrq->lock, hrrq_flags); + spin_unlock(hrrq->lock); return IRQ_NONE; }
@@ -5862,7 +5861,7 @@ static irqreturn_t ipr_isr(int irq, void *devp) if (unlikely(rc == IRQ_NONE)) rc = ipr_handle_other_interrupt(ioa_cfg, int_reg);
- spin_unlock_irqrestore(hrrq->lock, hrrq_flags); + spin_unlock(hrrq->lock); list_for_each_entry_safe(ipr_cmd, temp, &doneq, queue) { list_del(&ipr_cmd->queue); del_timer(&ipr_cmd->timer); @@ -5883,16 +5882,15 @@ static irqreturn_t ipr_isr_mhrrq(int irq, void *devp) { struct ipr_hrr_queue *hrrq = (struct ipr_hrr_queue *)devp; struct ipr_ioa_cfg *ioa_cfg = hrrq->ioa_cfg; - unsigned long hrrq_flags = 0; struct ipr_cmnd *ipr_cmd, *temp; irqreturn_t rc = IRQ_NONE; LIST_HEAD(doneq);
- spin_lock_irqsave(hrrq->lock, hrrq_flags); + spin_lock(hrrq->lock);
/* If interrupts are disabled, ignore the interrupt */ if (!hrrq->allow_interrupts) { - spin_unlock_irqrestore(hrrq->lock, hrrq_flags); + spin_unlock(hrrq->lock); return IRQ_NONE; }
@@ -5900,7 +5898,7 @@ static irqreturn_t ipr_isr_mhrrq(int irq, void *devp) if ((be32_to_cpu(*hrrq->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) == hrrq->toggle_bit) { irq_poll_sched(&hrrq->iopoll); - spin_unlock_irqrestore(hrrq->lock, hrrq_flags); + spin_unlock(hrrq->lock); return IRQ_HANDLED; } } else { @@ -5911,7 +5909,7 @@ static irqreturn_t ipr_isr_mhrrq(int irq, void *devp) rc = IRQ_HANDLED; }
- spin_unlock_irqrestore(hrrq->lock, hrrq_flags); + spin_unlock(hrrq->lock);
list_for_each_entry_safe(ipr_cmd, temp, &doneq, queue) { list_del(&ipr_cmd->queue); @@ -10087,16 +10085,15 @@ static int ipr_request_other_msi_irqs(struct ipr_ioa_cfg *ioa_cfg, static irqreturn_t ipr_test_intr(int irq, void *devp) { struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)devp; - unsigned long lock_flags = 0; irqreturn_t rc = IRQ_HANDLED;
dev_info(&ioa_cfg->pdev->dev, "Received IRQ : %d\n", irq); - spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); + spin_lock(ioa_cfg->host->host_lock);
ioa_cfg->msi_received = 1; wake_up(&ioa_cfg->msi_wait_q);
- spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); + spin_unlock(ioa_cfg->host->host_lock); return rc; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/lpfc/lpfc_sli.c | 49 +++++++++++++++++++------------------------- 1 file changed, 21 insertions(+), 28 deletions(-)
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c index fa1a714..6928750 100644 --- a/drivers/scsi/lpfc/lpfc_sli.c +++ b/drivers/scsi/lpfc/lpfc_sli.c @@ -12792,7 +12792,6 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) uint32_t ha_copy, hc_copy; uint32_t work_ha_copy; unsigned long status; - unsigned long iflag; uint32_t control;
MAILBOX_t *mbox, *pmbox; @@ -12820,7 +12819,7 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) if (lpfc_intr_state_check(phba)) return IRQ_NONE; /* Need to read HA REG for slow-path events */ - spin_lock_irqsave(&phba->hbalock, iflag); + spin_lock(&phba->hbalock); if (lpfc_readl(phba->HAregaddr, &ha_copy)) goto unplug_error; /* If somebody is waiting to handle an eratt don't process it @@ -12843,7 +12842,7 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) * interrupt. */ if (unlikely(phba->hba_flag & DEFER_ERATT)) { - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); return IRQ_NONE; }
@@ -12858,7 +12857,7 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) phba->HAregaddr); writel(hc_copy, phba->HCregaddr); readl(phba->HAregaddr); /* flush */ - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); } else ha_copy = phba->ha_copy;
@@ -12871,14 +12870,14 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) * Turn off Link Attention interrupts * until CLEAR_LA done */ - spin_lock_irqsave(&phba->hbalock, iflag); + spin_lock(&phba->hbalock); phba->sli.sli_flag &= ~LPFC_PROCESS_LA; if (lpfc_readl(phba->HCregaddr, &control)) goto unplug_error; control &= ~HC_LAINT_ENA; writel(control, phba->HCregaddr); readl(phba->HCregaddr); /* flush */ - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); } else work_ha_copy &= ~HA_LATT; @@ -12893,7 +12892,7 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) (HA_RXMASK << (4*LPFC_ELS_RING))); status >>= (4*LPFC_ELS_RING); if (status & HA_RXMASK) { - spin_lock_irqsave(&phba->hbalock, iflag); + spin_lock(&phba->hbalock); if (lpfc_readl(phba->HCregaddr, &control)) goto unplug_error;
@@ -12923,10 +12922,10 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) (uint32_t)((unsigned long) &phba->work_waitq)); } - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); } } - spin_lock_irqsave(&phba->hbalock, iflag); + spin_lock(&phba->hbalock); if (work_ha_copy & HA_ERATT) { if (lpfc_sli_read_hs(phba)) goto unplug_error; @@ -12954,7 +12953,7 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) /* First check out the status word */ lpfc_sli_pcimem_bcopy(mbox, pmbox, sizeof(uint32_t)); if (pmbox->mbxOwner != OWN_HOST) { - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); /* * Stray Mailbox Interrupt, mbxCommand <cmd> * mbxStatus <status> @@ -12970,7 +12969,7 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) work_ha_copy &= ~HA_MBATT; } else { phba->sli.mbox_active = NULL; - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); phba->last_completion_time = jiffies; del_timer(&phba->sli.mbox_tmo); if (pmb->mbox_cmpl) { @@ -13026,14 +13025,10 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) goto send_current_mbox; } } - spin_lock_irqsave( - &phba->pport->work_port_lock, - iflag); + spin_lock(&phba->pport->work_port_lock); phba->pport->work_port_events &= ~WORKER_MBOX_TMO; - spin_unlock_irqrestore( - &phba->pport->work_port_lock, - iflag); + spin_unlock(&phba->pport->work_port_lock);
/* Do NOT queue MBX_HEARTBEAT to the worker * thread for processing. @@ -13051,7 +13046,7 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) } } } else - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock);
if ((work_ha_copy & HA_MBATT) && (phba->sli.mbox_active == NULL)) { @@ -13068,14 +13063,14 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id) "MBX_SUCCESS\n"); }
- spin_lock_irqsave(&phba->hbalock, iflag); + spin_lock(&phba->hbalock); phba->work_ha |= work_ha_copy; - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); lpfc_worker_wake_up(phba); } return IRQ_HANDLED; unplug_error: - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); return IRQ_HANDLED;
} /* lpfc_sli_sp_intr_handler */ @@ -13105,7 +13100,6 @@ lpfc_sli_fp_intr_handler(int irq, void *dev_id) struct lpfc_hba *phba; uint32_t ha_copy; unsigned long status; - unsigned long iflag; struct lpfc_sli_ring *pring;
/* Get the driver's phba structure from the dev_id and @@ -13128,19 +13122,19 @@ lpfc_sli_fp_intr_handler(int irq, void *dev_id) if (lpfc_readl(phba->HAregaddr, &ha_copy)) return IRQ_HANDLED; /* Clear up only attention source related to fast-path */ - spin_lock_irqsave(&phba->hbalock, iflag); + spin_lock(&phba->hbalock); /* * If there is deferred error attention, do not check for * any interrupt. */ if (unlikely(phba->hba_flag & DEFER_ERATT)) { - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); return IRQ_NONE; } writel((ha_copy & (HA_R0_CLR_MSK | HA_R1_CLR_MSK)), phba->HAregaddr); readl(phba->HAregaddr); /* flush */ - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); } else ha_copy = phba->ha_copy;
@@ -14790,7 +14784,6 @@ lpfc_sli4_hba_intr_handler(int irq, void *dev_id) struct lpfc_hba *phba; struct lpfc_hba_eq_hdl *hba_eq_hdl; struct lpfc_queue *fpeq; - unsigned long iflag; int ecount = 0; int hba_eqidx; struct lpfc_eq_intr_info *eqi; @@ -14813,11 +14806,11 @@ lpfc_sli4_hba_intr_handler(int irq, void *dev_id) /* Check device state for handling interrupt */ if (unlikely(lpfc_intr_state_check(phba))) { /* Check again for link_state with lock held */ - spin_lock_irqsave(&phba->hbalock, iflag); + spin_lock(&phba->hbalock); if (phba->link_state < LPFC_LINK_DOWN) /* Flush, clear interrupt, and rearm the EQ */ lpfc_sli4_eqcq_flush(phba, fpeq); - spin_unlock_irqrestore(&phba->hbalock, iflag); + spin_unlock(&phba->hbalock); return IRQ_NONE; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/qla4xxx/ql4_isr.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/drivers/scsi/qla4xxx/ql4_isr.c b/drivers/scsi/qla4xxx/ql4_isr.c index 6f0e77d..9c64c8b 100644 --- a/drivers/scsi/qla4xxx/ql4_isr.c +++ b/drivers/scsi/qla4xxx/ql4_isr.c @@ -1353,10 +1353,9 @@ qla4_8xxx_msi_handler(int irq, void *dev_id) static irqreturn_t qla4_83xx_mailbox_intr_handler(int irq, void *dev_id) { struct scsi_qla_host *ha = dev_id; - unsigned long flags; uint32_t ival = 0;
- spin_lock_irqsave(&ha->hardware_lock, flags); + spin_lock(&ha->hardware_lock);
ival = readl(&ha->qla4_83xx_reg->risc_intr); if (ival == 0) { @@ -1377,7 +1376,7 @@ static irqreturn_t qla4_83xx_mailbox_intr_handler(int irq, void *dev_id) writel(ival, &ha->qla4_83xx_reg->mb_int_mask); ha->isr_count++; exit: - spin_unlock_irqrestore(&ha->hardware_lock, flags); + spin_unlock(&ha->hardware_lock); return IRQ_HANDLED; }
@@ -1393,14 +1392,13 @@ irqreturn_t qla4_8xxx_default_intr_handler(int irq, void *dev_id) { struct scsi_qla_host *ha = dev_id; - unsigned long flags; uint32_t intr_status; uint8_t reqs_count = 0;
if (is_qla8032(ha) || is_qla8042(ha)) { qla4_83xx_mailbox_intr_handler(irq, dev_id); } else { - spin_lock_irqsave(&ha->hardware_lock, flags); + spin_lock(&ha->hardware_lock); while (1) { if (!(readl(&ha->qla4_82xx_reg->host_int) & ISRX_82XX_RISC_INT)) { @@ -1421,7 +1419,7 @@ qla4_8xxx_default_intr_handler(int irq, void *dev_id) break; } ha->isr_count++; - spin_unlock_irqrestore(&ha->hardware_lock, flags); + spin_unlock(&ha->hardware_lock); } return IRQ_HANDLED; } @@ -1430,11 +1428,10 @@ irqreturn_t qla4_8xxx_msix_rsp_q(int irq, void *dev_id) { struct scsi_qla_host *ha = dev_id; - unsigned long flags; int intr_status; uint32_t ival = 0;
- spin_lock_irqsave(&ha->hardware_lock, flags); + spin_lock(&ha->hardware_lock); if (is_qla8032(ha) || is_qla8042(ha)) { ival = readl(&ha->qla4_83xx_reg->iocb_int_mask); if (ival == 0) { @@ -1457,7 +1454,7 @@ qla4_8xxx_msix_rsp_q(int irq, void *dev_id) } ha->isr_count++; exit_msix_rsp_q: - spin_unlock_irqrestore(&ha->hardware_lock, flags); + spin_unlock(&ha->hardware_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/BusLogic.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c index ccb061a..005809e 100644 --- a/drivers/scsi/BusLogic.c +++ b/drivers/scsi/BusLogic.c @@ -2880,11 +2880,10 @@ static void blogic_process_ccbs(struct blogic_adapter *adapter) static irqreturn_t blogic_inthandler(int irq_ch, void *devid) { struct blogic_adapter *adapter = (struct blogic_adapter *) devid; - unsigned long processor_flag; /* Acquire exclusive access to Host Adapter. */ - spin_lock_irqsave(adapter->scsi_host->host_lock, processor_flag); + spin_lock(adapter->scsi_host->host_lock); /* Handle Interrupts appropriately for each Host Adapter type. */ @@ -2952,7 +2951,7 @@ static irqreturn_t blogic_inthandler(int irq_ch, void *devid) /* Release exclusive access to Host Adapter. */ - spin_unlock_irqrestore(adapter->scsi_host->host_lock, processor_flag); + spin_unlock(adapter->scsi_host->host_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/a100u2w.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/a100u2w.c b/drivers/scsi/a100u2w.c index 66c5143..bcb7de6 100644 --- a/drivers/scsi/a100u2w.c +++ b/drivers/scsi/a100u2w.c @@ -1058,12 +1058,11 @@ static irqreturn_t inia100_intr(int irqno, void *devid) { struct Scsi_Host *shost = (struct Scsi_Host *)devid; struct orc_host *host = (struct orc_host *)shost->hostdata; - unsigned long flags; irqreturn_t res;
- spin_lock_irqsave(shost->host_lock, flags); + spin_lock(shost->host_lock); res = orc_interrupt(host); - spin_unlock_irqrestore(shost->host_lock, flags); + spin_unlock(shost->host_lock);
return res; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/a2091.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/a2091.c b/drivers/scsi/a2091.c index 5853db3..5b23602 100644 --- a/drivers/scsi/a2091.c +++ b/drivers/scsi/a2091.c @@ -27,14 +27,13 @@ static irqreturn_t a2091_intr(int irq, void *data) struct Scsi_Host *instance = data; struct a2091_hostdata *hdata = shost_priv(instance); unsigned int status = hdata->regs->ISTR; - unsigned long flags;
if (!(status & (ISTR_INT_F | ISTR_INT_P)) || !(status & ISTR_INTS)) return IRQ_NONE;
- spin_lock_irqsave(instance->host_lock, flags); + spin_lock(instance->host_lock); wd33c93_intr(instance); - spin_unlock_irqrestore(instance->host_lock, flags); + spin_unlock(instance->host_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/a3000.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/a3000.c b/drivers/scsi/a3000.c index 86f1da2..ac063a5 100644 --- a/drivers/scsi/a3000.c +++ b/drivers/scsi/a3000.c @@ -28,14 +28,13 @@ static irqreturn_t a3000_intr(int irq, void *data) struct Scsi_Host *instance = data; struct a3000_hostdata *hdata = shost_priv(instance); unsigned int status = hdata->regs->ISTR; - unsigned long flags;
if (!(status & ISTR_INT_P)) return IRQ_NONE; if (status & ISTR_INTS) { - spin_lock_irqsave(instance->host_lock, flags); + spin_lock(instance->host_lock); wd33c93_intr(instance); - spin_unlock_irqrestore(instance->host_lock, flags); + spin_unlock(instance->host_lock); return IRQ_HANDLED; } pr_warn("Non-serviced A3000 SCSI-interrupt? ISTR = %02x\n", status);
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/aha1740.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/aha1740.c b/drivers/scsi/aha1740.c index 0dc8310..fd9787d 100644 --- a/drivers/scsi/aha1740.c +++ b/drivers/scsi/aha1740.c @@ -214,14 +214,13 @@ static irqreturn_t aha1740_intr_handle(int irq, void *dev_id) struct ecb *ecbptr; struct scsi_cmnd *SCtmp; unsigned int base; - unsigned long flags; int handled = 0; struct aha1740_sg *sgptr; struct eisa_device *edev; if (!host) panic("aha1740.c: Irq from unknown host!\n"); - spin_lock_irqsave(host->host_lock, flags); + spin_lock(host->host_lock); base = host->io_port; number_serviced = 0; edev = HOSTDATA(host)->edev; @@ -308,7 +307,7 @@ static irqreturn_t aha1740_intr_handle(int irq, void *dev_id) number_serviced++; }
- spin_unlock_irqrestore(host->host_lock, flags); + spin_unlock(host->host_lock); return IRQ_RETVAL(handled); }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/bfa/bfad.c | 20 +++++++++----------- 1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c index 440ef32..8c6f3eb 100644 --- a/drivers/scsi/bfa/bfad.c +++ b/drivers/scsi/bfa/bfad.c @@ -1089,25 +1089,24 @@ bfad_intx(int irq, void *dev_id) { struct bfad_s *bfad = dev_id; struct list_head doneq; - unsigned long flags; bfa_boolean_t rc;
- spin_lock_irqsave(&bfad->bfad_lock, flags); + spin_lock(&bfad->bfad_lock); rc = bfa_intx(&bfad->bfa); if (!rc) { - spin_unlock_irqrestore(&bfad->bfad_lock, flags); + spin_unlock(&bfad->bfad_lock); return IRQ_NONE; }
bfa_comp_deq(&bfad->bfa, &doneq); - spin_unlock_irqrestore(&bfad->bfad_lock, flags); + spin_unlock(&bfad->bfad_lock);
if (!list_empty(&doneq)) { bfa_comp_process(&bfad->bfa, &doneq);
- spin_lock_irqsave(&bfad->bfad_lock, flags); + spin_lock(&bfad->bfad_lock); bfa_comp_free(&bfad->bfa, &doneq); - spin_unlock_irqrestore(&bfad->bfad_lock, flags); + spin_unlock(&bfad->bfad_lock); }
return IRQ_HANDLED; @@ -1120,20 +1119,19 @@ bfad_msix(int irq, void *dev_id) struct bfad_msix_s *vec = dev_id; struct bfad_s *bfad = vec->bfad; struct list_head doneq; - unsigned long flags;
- spin_lock_irqsave(&bfad->bfad_lock, flags); + spin_lock(&bfad->bfad_lock);
bfa_msix(&bfad->bfa, vec->msix.entry); bfa_comp_deq(&bfad->bfa, &doneq); - spin_unlock_irqrestore(&bfad->bfad_lock, flags); + spin_unlock(&bfad->bfad_lock);
if (!list_empty(&doneq)) { bfa_comp_process(&bfad->bfa, &doneq);
- spin_lock_irqsave(&bfad->bfad_lock, flags); + spin_lock(&bfad->bfad_lock); bfa_comp_free(&bfad->bfa, &doneq); - spin_unlock_irqrestore(&bfad->bfad_lock, flags); + spin_unlock(&bfad->bfad_lock); }
return IRQ_HANDLED;
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/esp_scsi.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/esp_scsi.c b/drivers/scsi/esp_scsi.c index 007ccef..1e44fb5 100644 --- a/drivers/scsi/esp_scsi.c +++ b/drivers/scsi/esp_scsi.c @@ -2175,10 +2175,9 @@ static void __esp_interrupt(struct esp *esp) irqreturn_t scsi_esp_intr(int irq, void *dev_id) { struct esp *esp = dev_id; - unsigned long flags; irqreturn_t ret;
- spin_lock_irqsave(esp->host->host_lock, flags); + spin_lock(esp->host->host_lock); ret = IRQ_NONE; if (esp->ops->irq_pending(esp)) { ret = IRQ_HANDLED; @@ -2198,7 +2197,7 @@ irqreturn_t scsi_esp_intr(int irq, void *dev_id) break; } } - spin_unlock_irqrestore(esp->host->host_lock, flags); + spin_unlock(esp->host->host_lock);
return ret; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/gvp11.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/gvp11.c b/drivers/scsi/gvp11.c index 727f8c8..f164a68 100644 --- a/drivers/scsi/gvp11.c +++ b/drivers/scsi/gvp11.c @@ -29,14 +29,13 @@ static irqreturn_t gvp11_intr(int irq, void *data) struct Scsi_Host *instance = data; struct gvp11_hostdata *hdata = shost_priv(instance); unsigned int status = hdata->regs->CNTR; - unsigned long flags;
if (!(status & GVP11_DMAC_INT_PENDING)) return IRQ_NONE;
- spin_lock_irqsave(instance->host_lock, flags); + spin_lock(instance->host_lock); wd33c93_intr(instance); - spin_unlock_irqrestore(instance->host_lock, flags); + spin_unlock(instance->host_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/hptiop.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/hptiop.c b/drivers/scsi/hptiop.c index db4c7a7..5424e31 100644 --- a/drivers/scsi/hptiop.c +++ b/drivers/scsi/hptiop.c @@ -834,11 +834,10 @@ static irqreturn_t hptiop_intr(int irq, void *dev_id) { struct hptiop_hba *hba = dev_id; int handled; - unsigned long flags;
- spin_lock_irqsave(hba->host->host_lock, flags); + spin_lock(hba->host->host_lock); handled = hba->ops->iop_intr(hba); - spin_unlock_irqrestore(hba->host->host_lock, flags); + spin_unlock(hba->host->host_lock);
return handled; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/ibmvscsi/ibmvfc.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c index 755313b..9afe1b2 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.c +++ b/drivers/scsi/ibmvscsi/ibmvfc.c @@ -3604,12 +3604,11 @@ static struct ibmvfc_crq *ibmvfc_next_crq(struct ibmvfc_host *vhost) static irqreturn_t ibmvfc_interrupt(int irq, void *dev_instance) { struct ibmvfc_host *vhost = (struct ibmvfc_host *)dev_instance; - unsigned long flags;
- spin_lock_irqsave(vhost->host->host_lock, flags); + spin_lock(vhost->host->host_lock); vio_disable_interrupts(to_vio_dev(vhost->dev)); tasklet_schedule(&vhost->tasklet); - spin_unlock_irqrestore(vhost->host->host_lock, flags); + spin_unlock(vhost->host->host_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/initio.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/initio.c b/drivers/scsi/initio.c index 814acc5..3fafa8f 100644 --- a/drivers/scsi/initio.c +++ b/drivers/scsi/initio.c @@ -2507,12 +2507,11 @@ static int initio_wait_done_disc(struct initio_host * host) static irqreturn_t i91u_intr(int irqno, void *dev_id) { struct Scsi_Host *dev = dev_id; - unsigned long flags; int r; - spin_lock_irqsave(dev->host_lock, flags); + spin_lock(dev->host_lock); r = initio_isr((struct initio_host *)dev->hostdata); - spin_unlock_irqrestore(dev->host_lock, flags); + spin_unlock(dev->host_lock); if (r) return IRQ_HANDLED; else
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/megaraid.c | 10 ++++------ drivers/scsi/megaraid/megaraid_sas_base.c | 5 ++--- 2 files changed, 6 insertions(+), 9 deletions(-)
diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c index 80f5469..7151752 100644 --- a/drivers/scsi/megaraid.c +++ b/drivers/scsi/megaraid.c @@ -1262,7 +1262,6 @@ static irqreturn_t megaraid_isr_iomapped(int irq, void *devp) { adapter_t *adapter = devp; - unsigned long flags; u8 status; u8 nstatus; u8 completed[MAX_FIRMWARE_STATUS]; @@ -1273,7 +1272,7 @@ megaraid_isr_iomapped(int irq, void *devp) /* * loop till F/W has more commands for us to complete. */ - spin_lock_irqsave(&adapter->lock, flags); + spin_lock(&adapter->lock);
do { /* Check if a valid interrupt is pending */ @@ -1319,7 +1318,7 @@ megaraid_isr_iomapped(int irq, void *devp)
out_unlock:
- spin_unlock_irqrestore(&adapter->lock, flags); + spin_unlock(&adapter->lock);
return IRQ_RETVAL(handled); } @@ -1338,7 +1337,6 @@ static irqreturn_t megaraid_isr_memmapped(int irq, void *devp) { adapter_t *adapter = devp; - unsigned long flags; u8 status; u32 dword = 0; u8 nstatus; @@ -1349,7 +1347,7 @@ megaraid_isr_memmapped(int irq, void *devp) /* * loop till F/W has more commands for us to complete. */ - spin_lock_irqsave(&adapter->lock, flags); + spin_lock(&adapter->lock);
do { /* Check if a valid interrupt is pending */ @@ -1399,7 +1397,7 @@ megaraid_isr_memmapped(int irq, void *devp)
out_unlock:
- spin_unlock_irqrestore(&adapter->lock, flags); + spin_unlock(&adapter->lock);
return IRQ_RETVAL(handled); } diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c index 63a4f48..5c6bf61 100644 --- a/drivers/scsi/megaraid/megaraid_sas_base.c +++ b/drivers/scsi/megaraid/megaraid_sas_base.c @@ -3996,15 +3996,14 @@ static irqreturn_t megasas_isr(int irq, void *devp) { struct megasas_irq_context *irq_context = devp; struct megasas_instance *instance = irq_context->instance; - unsigned long flags; irqreturn_t rc;
if (atomic_read(&instance->fw_reset_no_pci_access)) return IRQ_HANDLED;
- spin_lock_irqsave(&instance->hba_lock, flags); + spin_lock(&instance->hba_lock); rc = megasas_deplete_reply_queue(instance, DID_OK); - spin_unlock_irqrestore(&instance->hba_lock, flags); + spin_unlock(&instance->hba_lock);
return rc; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/mac53c94.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/mac53c94.c b/drivers/scsi/mac53c94.c index 9e98977..ab59c79 100644 --- a/drivers/scsi/mac53c94.c +++ b/drivers/scsi/mac53c94.c @@ -182,12 +182,11 @@ static void mac53c94_start(struct fsc_state *state)
static irqreturn_t do_mac53c94_interrupt(int irq, void *dev_id) { - unsigned long flags; struct Scsi_Host *dev = ((struct fsc_state *) dev_id)->current_req->device->host; - spin_lock_irqsave(dev->host_lock, flags); + spin_lock(dev->host_lock); mac53c94_interrupt(irq, dev_id); - spin_unlock_irqrestore(dev->host_lock, flags); + spin_unlock(dev->host_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/mesh.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/mesh.c b/drivers/scsi/mesh.c index 0a9f4e4..67c660a 100644 --- a/drivers/scsi/mesh.c +++ b/drivers/scsi/mesh.c @@ -1018,13 +1018,12 @@ static void handle_reset(struct mesh_state *ms)
static irqreturn_t do_mesh_interrupt(int irq, void *dev_id) { - unsigned long flags; struct mesh_state *ms = dev_id; struct Scsi_Host *dev = ms->host; - spin_lock_irqsave(dev->host_lock, flags); + spin_lock(dev->host_lock); mesh_interrupt(ms); - spin_unlock_irqrestore(dev->host_lock, flags); + spin_unlock(dev->host_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/mvumi.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c index 71b6a1f..b36164c 100644 --- a/drivers/scsi/mvumi.c +++ b/drivers/scsi/mvumi.c @@ -1790,11 +1790,10 @@ static void mvumi_handle_clob(struct mvumi_hba *mhba) static irqreturn_t mvumi_isr_handler(int irq, void *devp) { struct mvumi_hba *mhba = (struct mvumi_hba *) devp; - unsigned long flags;
- spin_lock_irqsave(mhba->shost->host_lock, flags); + spin_lock(mhba->shost->host_lock); if (unlikely(mhba->instancet->clear_intr(mhba) || !mhba->global_isr)) { - spin_unlock_irqrestore(mhba->shost->host_lock, flags); + spin_unlock(mhba->shost->host_lock); return IRQ_NONE; }
@@ -1815,7 +1814,7 @@ static irqreturn_t mvumi_isr_handler(int irq, void *devp) mhba->isr_status = 0; if (mhba->fw_state == FW_STATE_STARTED) mvumi_handle_clob(mhba); - spin_unlock_irqrestore(mhba->shost->host_lock, flags); + spin_unlock(mhba->shost->host_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/myrb.c | 20 ++++++++------------ 1 file changed, 8 insertions(+), 12 deletions(-)
diff --git a/drivers/scsi/myrb.c b/drivers/scsi/myrb.c index 3d8e91c..ecb57ee 100644 --- a/drivers/scsi/myrb.c +++ b/drivers/scsi/myrb.c @@ -2769,9 +2769,8 @@ static irqreturn_t DAC960_LA_intr_handler(int irq, void *arg) struct myrb_hba *cb = arg; void __iomem *base = cb->io_base; struct myrb_stat_mbox *next_stat_mbox; - unsigned long flags;
- spin_lock_irqsave(&cb->queue_lock, flags); + spin_lock(&cb->queue_lock); DAC960_LA_ack_intr(base); next_stat_mbox = cb->next_stat_mbox; while (next_stat_mbox->valid) { @@ -2806,7 +2805,7 @@ static irqreturn_t DAC960_LA_intr_handler(int irq, void *arg) } } cb->next_stat_mbox = next_stat_mbox; - spin_unlock_irqrestore(&cb->queue_lock, flags); + spin_unlock(&cb->queue_lock); return IRQ_HANDLED; }
@@ -3047,9 +3046,8 @@ static irqreturn_t DAC960_PG_intr_handler(int irq, void *arg) struct myrb_hba *cb = arg; void __iomem *base = cb->io_base; struct myrb_stat_mbox *next_stat_mbox; - unsigned long flags;
- spin_lock_irqsave(&cb->queue_lock, flags); + spin_lock(&cb->queue_lock); DAC960_PG_ack_intr(base); next_stat_mbox = cb->next_stat_mbox; while (next_stat_mbox->valid) { @@ -3082,7 +3080,7 @@ static irqreturn_t DAC960_PG_intr_handler(int irq, void *arg) myrb_handle_scsi(cb, cmd_blk, scmd); } cb->next_stat_mbox = next_stat_mbox; - spin_unlock_irqrestore(&cb->queue_lock, flags); + spin_unlock(&cb->queue_lock); return IRQ_HANDLED; }
@@ -3254,9 +3252,8 @@ static irqreturn_t DAC960_PD_intr_handler(int irq, void *arg) { struct myrb_hba *cb = arg; void __iomem *base = cb->io_base; - unsigned long flags;
- spin_lock_irqsave(&cb->queue_lock, flags); + spin_lock(&cb->queue_lock); while (DAC960_PD_hw_mbox_status_available(base)) { unsigned char id = DAC960_PD_read_status_cmd_ident(base); struct scsi_cmnd *scmd = NULL; @@ -3285,7 +3282,7 @@ static irqreturn_t DAC960_PD_intr_handler(int irq, void *arg) else myrb_handle_scsi(cb, cmd_blk, scmd); } - spin_unlock_irqrestore(&cb->queue_lock, flags); + spin_unlock(&cb->queue_lock); return IRQ_HANDLED; }
@@ -3420,9 +3417,8 @@ static irqreturn_t DAC960_P_intr_handler(int irq, void *arg) { struct myrb_hba *cb = arg; void __iomem *base = cb->io_base; - unsigned long flags;
- spin_lock_irqsave(&cb->queue_lock, flags); + spin_lock(&cb->queue_lock); while (DAC960_PD_hw_mbox_status_available(base)) { unsigned char id = DAC960_PD_read_status_cmd_ident(base); struct scsi_cmnd *scmd = NULL; @@ -3483,7 +3479,7 @@ static irqreturn_t DAC960_P_intr_handler(int irq, void *arg) else myrb_handle_scsi(cb, cmd_blk, scmd); } - spin_unlock_irqrestore(&cb->queue_lock, flags); + spin_unlock(&cb->queue_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/myrs.c | 15 ++++++--------- 1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/drivers/scsi/myrs.c b/drivers/scsi/myrs.c index 4adf9de..59379fc 100644 --- a/drivers/scsi/myrs.c +++ b/drivers/scsi/myrs.c @@ -2615,9 +2615,8 @@ static irqreturn_t DAC960_GEM_intr_handler(int irq, void *arg) struct myrs_hba *cs = arg; void __iomem *base = cs->io_base; struct myrs_stat_mbox *next_stat_mbox; - unsigned long flags;
- spin_lock_irqsave(&cs->queue_lock, flags); + spin_lock(&cs->queue_lock); DAC960_GEM_ack_intr(base); next_stat_mbox = cs->next_stat_mbox; while (next_stat_mbox->id > 0) { @@ -2654,7 +2653,7 @@ static irqreturn_t DAC960_GEM_intr_handler(int irq, void *arg) } } cs->next_stat_mbox = next_stat_mbox; - spin_unlock_irqrestore(&cs->queue_lock, flags); + spin_unlock(&cs->queue_lock); return IRQ_HANDLED; }
@@ -2865,9 +2864,8 @@ static irqreturn_t DAC960_BA_intr_handler(int irq, void *arg) struct myrs_hba *cs = arg; void __iomem *base = cs->io_base; struct myrs_stat_mbox *next_stat_mbox; - unsigned long flags;
- spin_lock_irqsave(&cs->queue_lock, flags); + spin_lock(&cs->queue_lock); DAC960_BA_ack_intr(base); next_stat_mbox = cs->next_stat_mbox; while (next_stat_mbox->id > 0) { @@ -2904,7 +2902,7 @@ static irqreturn_t DAC960_BA_intr_handler(int irq, void *arg) } } cs->next_stat_mbox = next_stat_mbox; - spin_unlock_irqrestore(&cs->queue_lock, flags); + spin_unlock(&cs->queue_lock); return IRQ_HANDLED; }
@@ -3115,9 +3113,8 @@ static irqreturn_t DAC960_LP_intr_handler(int irq, void *arg) struct myrs_hba *cs = arg; void __iomem *base = cs->io_base; struct myrs_stat_mbox *next_stat_mbox; - unsigned long flags;
- spin_lock_irqsave(&cs->queue_lock, flags); + spin_lock(&cs->queue_lock); DAC960_LP_ack_intr(base); next_stat_mbox = cs->next_stat_mbox; while (next_stat_mbox->id > 0) { @@ -3154,7 +3151,7 @@ static irqreturn_t DAC960_LP_intr_handler(int irq, void *arg) } } cs->next_stat_mbox = next_stat_mbox; - spin_unlock_irqrestore(&cs->queue_lock, flags); + spin_unlock(&cs->queue_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/ncr53c8xx.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/ncr53c8xx.c b/drivers/scsi/ncr53c8xx.c index c76e9f0..84e0bb6 100644 --- a/drivers/scsi/ncr53c8xx.c +++ b/drivers/scsi/ncr53c8xx.c @@ -8069,7 +8069,6 @@ static DEF_SCSI_QCMD(ncr53c8xx_queue_command)
irqreturn_t ncr53c8xx_intr(int irq, void *dev_id) { - unsigned long flags; struct Scsi_Host *shost = (struct Scsi_Host *)dev_id; struct host_data *host_data = (struct host_data *)shost->hostdata; struct ncb *np = host_data->ncb; @@ -8081,11 +8080,11 @@ irqreturn_t ncr53c8xx_intr(int irq, void *dev_id)
if (DEBUG_FLAGS & DEBUG_TINY) printk ("[");
- spin_lock_irqsave(&np->smp_lock, flags); + spin_lock(&np->smp_lock); ncr_exception(np); done_list = np->done_list; np->done_list = NULL; - spin_unlock_irqrestore(&np->smp_lock, flags); + spin_unlock(&np->smp_lock);
if (DEBUG_FLAGS & DEBUG_TINY) printk ("]\n");
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/nsp32.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/nsp32.c b/drivers/scsi/nsp32.c index e44b1a0..d927fde 100644 --- a/drivers/scsi/nsp32.c +++ b/drivers/scsi/nsp32.c @@ -1152,12 +1152,11 @@ static irqreturn_t do_nsp32_isr(int irq, void *dev_id) struct scsi_cmnd *SCpnt = data->CurrentSC; unsigned short auto_stat, irq_stat, trans_stat; unsigned char busmon, busphase; - unsigned long flags; int ret; int handled = 0; struct Scsi_Host *host = data->Host;
- spin_lock_irqsave(host->host_lock, flags); + spin_lock(host->host_lock);
/* * IRQ check, then enable IRQ mask @@ -1421,7 +1420,7 @@ static irqreturn_t do_nsp32_isr(int irq, void *dev_id) nsp32_write2(base, IRQ_CONTROL, 0);
out2: - spin_unlock_irqrestore(host->host_lock, flags); + spin_unlock(host->host_lock);
nsp32_dbg(NSP32_DEBUG_INTR, "exit");
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/pmcraid.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c index 834556e..5967284 100644 --- a/drivers/scsi/pmcraid.c +++ b/drivers/scsi/pmcraid.c @@ -4145,7 +4145,6 @@ static irqreturn_t pmcraid_isr_msix(int irq, void *dev_id) { struct pmcraid_isr_param *hrrq_vector; struct pmcraid_instance *pinstance; - unsigned long lock_flags; u32 intrs_val; int hrrq_id;
@@ -4170,12 +4169,9 @@ static irqreturn_t pmcraid_isr_msix(int irq, void *dev_id)
pmcraid_err("ISR: error interrupts: %x \ initiating reset\n", intrs_val); - spin_lock_irqsave(pinstance->host->host_lock, - lock_flags); + spin_lock(pinstance->host->host_lock); pmcraid_initiate_reset(pinstance); - spin_unlock_irqrestore( - pinstance->host->host_lock, - lock_flags); + spin_unlock(pinstance->host->host_lock); } /* If interrupt was as part of the ioa initialization, * clear it. Delete the timer and wakeup the
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/pcmcia/sym53c500_cs.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/pcmcia/sym53c500_cs.c b/drivers/scsi/pcmcia/sym53c500_cs.c index a366ff1..67719de 100644 --- a/drivers/scsi/pcmcia/sym53c500_cs.c +++ b/drivers/scsi/pcmcia/sym53c500_cs.c @@ -341,7 +341,6 @@ SYM53C500_pio_write(int fast_pio, int base, unsigned char *request, unsigned int static irqreturn_t SYM53C500_intr(int irq, void *dev_id) { - unsigned long flags; struct Scsi_Host *dev = dev_id; DEB(unsigned char fifo_size;) DEB(unsigned char seq_reg;) @@ -353,7 +352,7 @@ SYM53C500_intr(int irq, void *dev_id) struct scsi_cmnd *curSC = data->current_SC; int fast_pio = data->fast_pio;
- spin_lock_irqsave(dev->host_lock, flags); + spin_lock(dev->host_lock);
VDEB(printk("SYM53C500_intr called\n"));
@@ -487,7 +486,7 @@ SYM53C500_intr(int irq, void *dev_id) break; } out: - spin_unlock_irqrestore(dev->host_lock, flags); + spin_unlock(dev->host_lock); return IRQ_HANDLED;
idle_out:
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/qlogicfas408.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/qlogicfas408.c b/drivers/scsi/qlogicfas408.c index 136681a..2a70902 100644 --- a/drivers/scsi/qlogicfas408.c +++ b/drivers/scsi/qlogicfas408.c @@ -426,12 +426,11 @@ static void ql_ihandl(void *dev_id)
irqreturn_t qlogicfas408_ihandl(int irq, void *dev_id) { - unsigned long flags; struct Scsi_Host *host = dev_id;
- spin_lock_irqsave(host->host_lock, flags); + spin_lock(host->host_lock); ql_ihandl(dev_id); - spin_unlock_irqrestore(host->host_lock, flags); + spin_unlock(host->host_lock); return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/qlogicpti.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/qlogicpti.c b/drivers/scsi/qlogicpti.c index d84e218..078555e 100644 --- a/drivers/scsi/qlogicpti.c +++ b/drivers/scsi/qlogicpti.c @@ -1203,10 +1203,9 @@ static struct scsi_cmnd *qlogicpti_intr_handler(struct qlogicpti *qpti) static irqreturn_t qpti_intr(int irq, void *dev_id) { struct qlogicpti *qpti = dev_id; - unsigned long flags; struct scsi_cmnd *dq;
- spin_lock_irqsave(qpti->qhost->host_lock, flags); + spin_lock(qpti->qhost->host_lock); dq = qlogicpti_intr_handler(qpti);
if (dq != NULL) { @@ -1218,7 +1217,7 @@ static irqreturn_t qpti_intr(int irq, void *dev_id) dq = next; } while (dq != NULL); } - spin_unlock_irqrestore(qpti->qhost->host_lock, flags); + spin_unlock(qpti->qhost->host_lock);
return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/sgiwd93.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/sgiwd93.c b/drivers/scsi/sgiwd93.c index cf1030c..21165bd 100644 --- a/drivers/scsi/sgiwd93.c +++ b/drivers/scsi/sgiwd93.c @@ -53,11 +53,10 @@ struct hpc_chunk { static irqreturn_t sgiwd93_intr(int irq, void *dev_id) { struct Scsi_Host * host = dev_id; - unsigned long flags;
- spin_lock_irqsave(host->host_lock, flags); + spin_lock(host->host_lock); wd33c93_intr(host); - spin_unlock_irqrestore(host->host_lock, flags); + spin_unlock(host->host_lock);
return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/stex.c | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 deletions(-)
diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c index 1247120..1e797d1 100644 --- a/drivers/scsi/stex.c +++ b/drivers/scsi/stex.c @@ -886,9 +886,8 @@ static irqreturn_t stex_intr(int irq, void *__hba) struct st_hba *hba = __hba; void __iomem *base = hba->mmio_base; u32 data; - unsigned long flags;
- spin_lock_irqsave(hba->host->host_lock, flags); + spin_lock(hba->host->host_lock);
data = readl(base + ODBL);
@@ -897,14 +896,14 @@ static irqreturn_t stex_intr(int irq, void *__hba) writel(data, base + ODBL); readl(base + ODBL); /* flush */ stex_mu_intr(hba, data); - spin_unlock_irqrestore(hba->host->host_lock, flags); + spin_unlock(hba->host->host_lock); if (unlikely(data & MU_OUTBOUND_DOORBELL_REQUEST_RESET && hba->cardtype == st_shasta)) queue_work(hba->work_q, &hba->reset_work); return IRQ_HANDLED; }
- spin_unlock_irqrestore(hba->host->host_lock, flags); + spin_unlock(hba->host->host_lock);
return IRQ_NONE; } @@ -987,9 +986,8 @@ static irqreturn_t stex_ss_intr(int irq, void *__hba) struct st_hba *hba = __hba; void __iomem *base = hba->mmio_base; u32 data; - unsigned long flags;
- spin_lock_irqsave(hba->host->host_lock, flags); + spin_lock(hba->host->host_lock);
if (hba->cardtype == st_yel) { data = readl(base + YI2H_INT); @@ -997,7 +995,7 @@ static irqreturn_t stex_ss_intr(int irq, void *__hba) /* clear the interrupt */ writel(data, base + YI2H_INT_C); stex_ss_mu_intr(hba); - spin_unlock_irqrestore(hba->host->host_lock, flags); + spin_unlock(hba->host->host_lock); if (unlikely(data & SS_I2H_REQUEST_RESET)) queue_work(hba->work_q, &hba->reset_work); return IRQ_HANDLED; @@ -1011,14 +1009,14 @@ static irqreturn_t stex_ss_intr(int irq, void *__hba) writel((1 << 22), base + YH2I_INT); } stex_ss_mu_intr(hba); - spin_unlock_irqrestore(hba->host->host_lock, flags); + spin_unlock(hba->host->host_lock); if (unlikely(data & SS_I2H_REQUEST_RESET)) queue_work(hba->work_q, &hba->reset_work); return IRQ_HANDLED; } }
- spin_unlock_irqrestore(hba->host->host_lock, flags); + spin_unlock(hba->host->host_lock);
return IRQ_NONE; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/vmw_pvscsi.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/scsi/vmw_pvscsi.c b/drivers/scsi/vmw_pvscsi.c index 081f54a..2994b3a 100644 --- a/drivers/scsi/vmw_pvscsi.c +++ b/drivers/scsi/vmw_pvscsi.c @@ -1180,11 +1180,11 @@ static irqreturn_t pvscsi_isr(int irq, void *devp) struct pvscsi_adapter *adapter = devp; unsigned long flags;
- spin_lock_irqsave(&adapter->hw_lock, flags); + spin_lock(&adapter->hw_lock); pvscsi_process_completion_ring(adapter); if (adapter->use_msg && pvscsi_msg_pending(adapter)) queue_work(adapter->workqueue, &adapter->work); - spin_unlock_irqrestore(&adapter->hw_lock, flags); + spin_unlock(&adapter->hw_lock);
return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/wd719x.c | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/scsi/wd719x.c b/drivers/scsi/wd719x.c index edc8a13..ef372f3 100644 --- a/drivers/scsi/wd719x.c +++ b/drivers/scsi/wd719x.c @@ -657,10 +657,9 @@ static irqreturn_t wd719x_interrupt(int irq, void *dev_id) { struct wd719x *wd = dev_id; union wd719x_regs regs; - unsigned long flags; u32 SCB_out;
- spin_lock_irqsave(wd->sh->host_lock, flags); + spin_lock(wd->sh->host_lock); /* read SCB pointer back from card */ SCB_out = wd719x_readl(wd, WD719X_AMR_SCB_OUT); /* read all status info at once */ @@ -668,7 +667,7 @@ static irqreturn_t wd719x_interrupt(int irq, void *dev_id)
switch (regs.bytes.INT) { case WD719X_INT_NONE: - spin_unlock_irqrestore(wd->sh->host_lock, flags); + spin_unlock(wd->sh->host_lock); return IRQ_NONE; case WD719X_INT_LINKNOSTATUS: dev_err(&wd->pdev->dev, "linked command completed with no status\n"); @@ -705,7 +704,7 @@ static irqreturn_t wd719x_interrupt(int irq, void *dev_id) } /* clear interrupt so another can happen */ wd719x_writeb(wd, WD719X_AMR_INT_STATUS, WD719X_INT_NONE); - spin_unlock_irqrestore(wd->sh->host_lock, flags); + spin_unlock(wd->sh->host_lock);
return IRQ_HANDLED; }
It is redundant to do irqsave and irqrestore in hard IRQ context, which is already an irq-disabled context.
Signed-off-by: Xiaofei Tan tanxiaofei@huawei.com --- drivers/scsi/advansys.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/scsi/advansys.c b/drivers/scsi/advansys.c index ec56278..a80f5e2 100644 --- a/drivers/scsi/advansys.c +++ b/drivers/scsi/advansys.c @@ -7177,10 +7177,9 @@ static irqreturn_t advansys_interrupt(int irq, void *dev_id) struct Scsi_Host *shost = dev_id; struct asc_board *boardp = shost_priv(shost); irqreturn_t result = IRQ_NONE; - unsigned long flags;
ASC_DBG(2, "boardp 0x%p\n", boardp); - spin_lock_irqsave(shost->host_lock, flags); + spin_lock(shost->host_lock); if (ASC_NARROW_BOARD(boardp)) { if (AscIsIntPending(shost->io_port)) { result = IRQ_HANDLED; @@ -7195,7 +7194,7 @@ static irqreturn_t advansys_interrupt(int irq, void *dev_id) ASC_STATS(shost, interrupt); } } - spin_unlock_irqrestore(shost->host_lock, flags); + spin_unlock(shost->host_lock);
ASC_DBG(1, "end\n"); return result;
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in the hard IRQ handlers of SCSI drivers. There are no functional changes, but it may speed things up if interrupts happen very often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
Please see also, https://lore.kernel.org/linux-scsi/89c5cb05cb844939ae684db0077f675f@h3c.com/
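To make the concern concrete, consider a hypothetical sequence on a platform where a higher-priority interrupt can preempt a running handler on the same CPU. The demo_* names below are invented for the example, and both handlers are assumed to share one lock:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct demo_host {
        spinlock_t lock;
};

static void demo_process(struct demo_host *host)
{
        /* shared-state handling; details omitted */
}

/* Lower-priority interrupt: takes the lock without masking interrupts. */
static irqreturn_t demo_low_prio_isr(int irq, void *dev_id)
{
        struct demo_host *host = dev_id;

        spin_lock(&host->lock);
        /*
         * On a platform with prioritized (nested) interrupts, a
         * higher-priority interrupt can arrive right here, on this CPU,
         * while the lock is held.
         */
        demo_process(host);
        spin_unlock(&host->lock);
        return IRQ_HANDLED;
}

/* Higher-priority interrupt sharing the same lock: if it nests inside
 * demo_low_prio_isr() above, this spin_lock() spins forever, because the
 * lock holder cannot resume until this handler returns. Taking the lock
 * with spin_lock_irqsave() in demo_low_prio_isr() would mask local
 * interrupts and prevent the nesting. */
static irqreturn_t demo_high_prio_isr(int irq, void *dev_id)
{
        struct demo_host *host = dev_id;

        spin_lock(&host->lock);
        demo_process(host);
        spin_unlock(&host->lock);
        return IRQ_HANDLED;
}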
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Monday, February 8, 2021 8:57 PM To: tanxiaofei tanxiaofei@huawei.com Cc: jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org Subject: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in the hard IRQ handlers of SCSI drivers. There are no functional changes, but it may speed things up if interrupts happen very often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling IRQs in hard IRQ context is simply wrong. Since this commit https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... ("genirq: Run irq handlers with interrupts disabled"),
interrupt handlers are definitely running in an irq-disabled context unless they explicitly re-enable interrupts in the handler to permit other interrupts.
Please see also, https://lore.kernel.org/linux-scsi/89c5cb05cb844939ae684db0077f675f@h3c.com/
Thanks Barry
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Monday, February 8, 2021 8:57 PM To: tanxiaofei tanxiaofei@huawei.com Cc: jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org Subject: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in the hard IRQ handlers of SCSI drivers. There are no functional changes, but it may speed things up if interrupts happen very often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling IRQs in hard IRQ context is simply wrong. Since this commit https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... ("genirq: Run irq handlers with interrupts disabled"),
interrupt handlers are definitely running in an irq-disabled context unless they explicitly re-enable interrupts in the handler to permit other interrupts.
Repeating the same claim does not somehow make it true. If you put your claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
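Reusing the invented demo_host example from earlier in the thread, this is roughly the kind of locking being described (a sketch of the general pattern, not the actual sonic_interrupt() code):

static irqreturn_t demo_isr(int irq, void *dev_id)
{
        struct demo_host *host = dev_id;
        unsigned long flags;

        /*
         * Taking the lock with irqsave masks local interrupts while the
         * lock is held, so a higher-priority interrupt cannot nest on
         * this CPU and contend for the same lock. On platforms where
         * handlers already run with interrupts disabled, this only costs
         * a redundant flags save/restore.
         */
        spin_lock_irqsave(&host->lock, flags);
        demo_process(host);
        spin_unlock_irqrestore(&host->lock, flags);

        return IRQ_HANDLED;
}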
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Tuesday, February 9, 2021 6:06 PM To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org Subject: RE: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Monday, February 8, 2021 8:57 PM To: tanxiaofei tanxiaofei@huawei.com Cc: jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org Subject: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in the hard IRQ handlers of SCSI drivers. There are no functional changes, but it may speed things up if interrupts happen very often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling IRQs in hard IRQ context is simply wrong. Since this commit
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e58aa3d2d0cc
("genirq: Run irq handlers with interrupts disabled"),
interrupt handlers are definitely running in an irq-disabled context unless they explicitly re-enable interrupts in the handler to permit other interrupts.
Repeating the same claim does not somehow make it true. If you put your
Sorry, I didn't realize Xiaofei had replied.
claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
That sounds like an m68k implementation issue, since IRQF_DISABLED has been totally removed.
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
We used to have IRQF_DISABLED to support so-called "fast interrupts" and avoid this. But the concept has been totally removed. It is interesting that m68k still has this issue.
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
Thanks Barry
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in the hard IRQ handlers of SCSI drivers. There are no functional changes, but it may speed things up if interrupts happen very often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling IRQs in hard IRQ context is simply wrong. Since this commit
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... ("genirq: Run irq handlers with interrupts disabled"),
interrupt handlers are definitely running in an irq-disabled context unless they explicitly re-enable interrupts in the handler to permit other interrupts.
Repeating the same claim does not somehow make it true.
Sorry for I didn't realize xiaofei had replied.
I was referring to the claim in patch 00/32, i.e. that interrupt handlers only run when irqs are disabled.
If you put your claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
Sounds like an implementation issue of m68k since IRQF_DISABLED has been totally removed.
It's true that IRQF_DISABLED could be used to avoid the need for irq locks in interrupt handlers. So, if you want to remove irq locks from interrupt handlers, today you can't use IRQF_DISABLED to help you. So what?
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
We used to have IRQF_DISABLED to support so-called "fast interrupt" to avoid this.
But the concept has been totally removed. That is interesting if m68k still has this issue.
Prioritized interrupts are beneficial. Why would you want to avoid them?
Moreover, there's no reason to believe that m68k is the only platform that supports nested interrupts.
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
Thanks Barry
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Wednesday, February 10, 2021 1:29 PM To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org Subject: RE: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI drivers. There are no function changes, but may speed up if interrupt happen too often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling irq in hardIRQ is simply wrong. Since this commit
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e58aa3d2d0cc
genirq: Run irq handlers with interrupts disabled
interrupt handlers are definitely running in a irq-disabled context unless irq handlers enable them explicitly in the handler to permit other interrupts.
Repeating the same claim does not somehow make it true.
Sorry for I didn't realize xiaofei had replied.
I was referring to the claim in patch 00/32, i.e. that interrupt handlers only run when irqs are disabled.
If you put your claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
Sounds like an implementation issue of m68k since IRQF_DISABLED has been totally removed.
It's true that IRQF_DISABLED could be used to avoid the need for irq locks in interrupt handlers. So, if you want to remove irq locks from interrupt handlers, today you can't use IRQF_DISABLED to help you. So what?
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
We used to have IRQF_DISABLED to support so-called "fast interrupt" to avoid this.
But the concept has been totally removed. That is interesting if m68k still has this issue.
Prioritized interrupts are beneficial. Why would you want to avoid them?
I doubt this is true, as it has already been considered unnecessary in Linux: https://lwn.net/Articles/380931/
Moreover, there's no reason to believe that m68k is the only platform that supports nested interrupts.
I doubt that is true, as genirq runs under the assumption that hardIRQ runs in an irq-disabled context: "We run all handlers with interrupts disabled and expect them not to enable them. Warn when we catch one who does." https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
If it is, m68k is against the assumption of genirq.
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
Thanks Barry
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI drivers. There are no function changes, but may speed up if interrupt happen too often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling irq in hardIRQ is simply wrong. Since this commit
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... genirq: Run irq handlers with interrupts disabled
interrupt handlers are definitely running in a irq-disabled context unless irq handlers enable them explicitly in the handler to permit other interrupts.
Repeating the same claim does not somehow make it true.
Sorry for I didn't realize xiaofei had replied.
I was referring to the claim in patch 00/32, i.e. that interrupt handlers only run when irqs are disabled.
If you put your claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
Sounds like an implementation issue of m68k since IRQF_DISABLED has been totally removed.
It's true that IRQF_DISABLED could be used to avoid the need for irq locks in interrupt handlers. So, if you want to remove irq locks from interrupt handlers, today you can't use IRQF_DISABLED to help you. So what?
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
We used to have IRQF_DISABLED to support so-called "fast interrupt" to avoid this.
But the concept has been totally removed. That is interesting if m68k still has this issue.
Prioritized interrupts are beneficial. Why would you want to avoid them?
I doubt this is true, as it has already been considered unnecessary in Linux: https://lwn.net/Articles/380931/
The article you cited does not refute what I said about prioritized interrupts.
The article is about eliminating the distinction between fast and slow interrupt handlers.
The article says that some developers convinced Linus that, although minimal interrupt latency is desirable, it isn't strictly necessary.
The article also warns of stack overflow from arbitrarily deep slow interrupt nesting, but that's not what m68k does.
Moreover, there's no reason to believe that m68k is the only platform that supports nested interrupts.
I doubt that is true, as genirq runs under the assumption that hardIRQ runs in an irq-disabled context:
I'm not going to guess whether other platforms might be affected -- you're supporting this patch so you will have to show that it is correct.
"We run all handlers with interrupts disabled and expect them not to enable them. Warn when we catch one who does." https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
If it is, m68k is against the assumption of genirq.
Interrupt handlers on m68k do not enable interrupts. If they did, you would see that warning fire. It doesn't fire. Try it.
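For context, the check in question lives in the generic irq handling code and, paraphrased (the exact file and message format have varied across releases), runs right after each handler returns:

/* Paraphrased from kernel/irq/handle.c: warn if a handler returned
 * with interrupts enabled, then force them off again. On m68k this
 * never fires because irqs_disabled() reports "disabled" whenever
 * the IPL is non-zero, which is the case while a handler runs.
 */
res = action->handler(irq, action->dev_id);

if (WARN_ONCE(!irqs_disabled(),
	      "irq %u handler %pS enabled interrupts\n",
	      irq, action->handler))
	local_irq_disable();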
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
Thanks Barry
-----Original Message----- From: Song Bao Hua (Barry Song) Sent: Tuesday, February 9, 2021 6:28 PM To: 'Finn Thain' fthain@telegraphics.com.au Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org Subject: RE: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Tuesday, February 9, 2021 6:06 PM To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org Subject: RE: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Monday, February 8, 2021 8:57 PM To: tanxiaofei tanxiaofei@huawei.com Cc: jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org Subject: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI drivers. There are no function changes, but may speed up if interrupt happen too often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling irq in hardIRQ is simply wrong. Since this commit
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e58aa3d2d0cc
genirq: Run irq handlers with interrupts disabled
interrupt handlers are definitely running in a irq-disabled context unless irq handlers enable them explicitly in the handler to permit other interrupts.
Repeating the same claim does not somehow make it true. If you put your
Sorry for I didn't realize xiaofei had replied.
claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
Sounds like an implementation issue of m68k since IRQF_DISABLED has been totally removed.
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
We used to have IRQF_DISABLED to support so-called "fast interrupt" to avoid this. But the concept has been totally removed. That is interesting if m68k still has this issue.
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
Is the comment in sonic_interrupt() outdated according to this: m68k: irq: Remove IRQF_DISABLED https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... http://lkml.iu.edu/hypermail/linux/kernel/1109.2/01687.html
and this: genirq: Warn when handler enables interrupts https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
wouldn't genirq report a warning on m68k?
Thanks Barry
Thanks Barry
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
Is the comment in sonic_interrupt() outdated according to this: m68k: irq: Remove IRQF_DISABLED https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... http://lkml.iu.edu/hypermail/linux/kernel/1109.2/01687.html
The removal of IRQF_DISABLED isn't relevant to this driver. Commit 77a42796786c ("m68k: Remove deprecated IRQF_DISABLED") did not disable interrupts, it just removed some code to enable them.
The code and comments in sonic_interrupt() are correct. You can confirm this for yourself quite easily using QEMU and a cross-compiler.
and this: genirq: Warn when handler enables interrupts https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
wouldn't genirq report a warning on m68k?
There is no warning from m68k builds. That's because arch_irqs_disabled() returns true when the IPL is non-zero.
Thanks Barry
Thanks Barry
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Wednesday, February 10, 2021 5:16 PM To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org Subject: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
Is the comment in sonic_interrupt() outdated according to this: m68k: irq: Remove IRQF_DISABLED
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=77a4279
The removal of IRQF_DISABLED isn't relevant to this driver. Commit 77a42796786c ("m68k: Remove deprecated IRQF_DISABLED") did not disable interrupts, it just removed some code to enable them.
The code and comments in sonic_interrupt() are correct. You can confirm this for yourself quite easily using QEMU and a cross-compiler.
and this: genirq: Warn when handler enables interrupts
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b738a50a
wouldn't genirq report a warning on m68k?
There is no warning from m68k builds. That's because arch_irqs_disabled() returns true when the IPL is non-zero.
So for m68k, the case is that arch_irqs_disabled() is true, but interrupts can still come in?
Then it seems very confusing. If prioritized interrupts can still come in while arch_irqs_disabled() is true, how could spin_lock_irqsave() block the prioritized interrupts? Isn't arch_irqs_disabled() a status reflection of irq disable API?
Thanks Barry
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
Is the comment in sonic_interrupt() outdated according to this: m68k: irq: Remove IRQF_DISABLED https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... http://lkml.iu.edu/hypermail/linux/kernel/1109.2/01687.html
The removal of IRQF_DISABLED isn't relevant to this driver. Commit 77a42796786c ("m68k: Remove deprecated IRQF_DISABLED") did not disable interrupts, it just removed some code to enable them.
The code and comments in sonic_interrupt() are correct. You can confirm this for yourself quite easily using QEMU and a cross-compiler.
and this: genirq: Warn when handler enables interrupts https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
wouldn't genirq report a warning on m68k?
There is no warning from m68k builds. That's because arch_irqs_disabled() returns true when the IPL is non-zero.
So for m68k, the case is arch_irqs_disabled() is true, but interrupts can still come?
Then it seems it is very confusing. If prioritized interrupts can still come while arch_irqs_disabled() is true,
Yes, on m68k CPUs, an IRQ having a priority level higher than the present priority mask will get serviced.
Non-Maskable Interrupt (NMI) is not subject to this rule and gets serviced regardless.
how could spin_lock_irqsave() block the prioritized interrupts?
It raises the mask level to 7. Again, please see arch/m68k/include/asm/irqflags.h
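For readers who do not want to dig through that header, the relevant helpers look roughly like this (paraphrased from arch/m68k/include/asm/irqflags.h; details differ between sub-architectures such as ColdFire):

/* The interrupt priority mask (IPL) lives in bits 8-10 of the
 * 68k status register (SR). */
static inline unsigned long arch_local_save_flags(void)
{
	unsigned long flags;

	asm volatile ("movew %%sr,%0" : "=d" (flags) : : "memory");
	return flags;
}

static inline void arch_local_irq_disable(void)
{
	/* Raise the mask to IPL 7: everything except NMI is blocked. */
	asm volatile ("oriw #0x0700,%%sr" : : : "memory");
}

static inline bool arch_irqs_disabled_flags(unsigned long flags)
{
	/*
	 * "Disabled" means any non-zero IPL, i.e. at least some
	 * interrupt levels are masked -- not necessarily all of them.
	 */
	return (flags & 0x0700) != 0;
}

So spin_lock_irqsave() ends up at IPL 7, where only NMI gets through, while a handler already running at, say, IPL 3 is enough to make arch_irqs_disabled() return true.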
Isn't arch_irqs_disabled() a status reflection of irq disable API?
Why not?
Are all interrupts (including NMI) masked whenever arch_irqs_disabled() returns true on your platforms?
Thanks Barry
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Thursday, February 11, 2021 10:07 AM To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org Subject: RE: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
Is the comment in sonic_interrupt() outdated according to this: m68k: irq: Remove IRQF_DISABLED
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=77a4279
The removal of IRQF_DISABLED isn't relevant to this driver. Commit 77a42796786c ("m68k: Remove deprecated IRQF_DISABLED") did not disable interrupts, it just removed some code to enable them.
The code and comments in sonic_interrupt() are correct. You can confirm this for yourself quite easily using QEMU and a cross-compiler.
and this: genirq: Warn when handler enables interrupts
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b738a50a
wouldn't genirq report a warning on m68k?
There is no warning from m68k builds. That's because arch_irqs_disabled() returns true when the IPL is non-zero.
So for m68k, the case is arch_irqs_disabled() is true, but interrupts can still come?
Then it seems it is very confusing. If prioritized interrupts can still come while arch_irqs_disabled() is true,
Yes, on m68k CPUs, an IRQ having a priority level higher than the present priority mask will get serviced.
Non-Maskable Interrupt (NMI) is not subject to this rule and gets serviced regardless.
how could spin_lock_irqsave() block the prioritized interrupts?
It raises the mask level to 7. Again, please see arch/m68k/include/asm/irqflags.h
Hi Finn, Thanks for your explanation again.
TBH, that is why m68k is so confusing. irqs_disabled() on m68k should just reflect whether all interrupts except NMI have been disabled.
irqs_disabled() should be consistent with the calling of APIs such as local_irq_disable, local_irq_save, spin_lock_irqsave etc.
Isn't arch_irqs_disabled() a status reflection of irq disable API?
Why not?
If so, arch_irqs_disabled() should mean all interrupts have been masked except NMI as NMI is unmaskable.
Are all interrupts (including NMI) masked whenever arch_irqs_disabled() returns true on your platforms?
On my platform, once irqs_disabled() is true, all interrupts are masked except NMI. NMI just ignores spin_lock_irqsave or local_irq_disable.
On ARM64, we also have high-priority interrupts, but they are running as PSEUDO_NMI: https://lwn.net/Articles/755906/
On m68k, it seems you mean: irq_disabled() is true, but high-priority interrupts can still come; local_irq_disable() can disable high-priority interrupts, and at that time, irq_disabled() is also true.
TBH, this is wrong and confusing on m68k.
Thanks Barry
Thanks Barry
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
There is no warning from m68k builds. That's because arch_irqs_disabled() returns true when the IPL is non-zero.
So for m68k, the case is arch_irqs_disabled() is true, but interrupts can still come?
Then it seems it is very confusing. If prioritized interrupts can still come while arch_irqs_disabled() is true,
Yes, on m68k CPUs, an IRQ having a priority level higher than the present priority mask will get serviced.
Non-Maskable Interrupt (NMI) is not subject to this rule and gets serviced regardless.
how could spin_lock_irqsave() block the prioritized interrupts?
It raises the mask level to 7. Again, please see arch/m68k/include/asm/irqflags.h
Hi Finn, Thanks for your explanation again.
TBH, that is why m68k is so confusing. irqs_disabled() on m68k should just reflect the status of all interrupts have been disabled except NMI.
irqs_disabled() should be consistent with the calling of APIs such as local_irq_disable, local_irq_save, spin_lock_irqsave etc.
When irqs_disabled() returns true, we cannot infer that arch_local_irq_disable() was called. But I have not yet found driver code or core kernel code attempting that inference.
Isn't arch_irqs_disabled() a status reflection of irq disable API?
Why not?
If so, arch_irqs_disabled() should mean all interrupts have been masked except NMI as NMI is unmaskable.
Can you support that claim with a reference to core kernel code or documentation? (If some arch code agrees with you, that's neither here nor there.)
Are all interrupts (including NMI) masked whenever arch_irqs_disabled() returns true on your platforms?
On my platform, once irqs_disabled() is true, all interrupts are masked except NMI. NMI just ignores spin_lock_irqsave or local_irq_disable.
On ARM64, we also have high-priority interrupts, but they are running as PSEUDO_NMI: https://lwn.net/Articles/755906/
A glance at the ARM GIC specification suggests that your hardware works much like 68000 hardware.
When enabled, a CPU interface takes the highest priority pending interrupt for its connected processor and determines whether the interrupt has sufficient priority for it to signal the interrupt request to the processor. [...]
When the processor acknowledges the interrupt at the CPU interface, the Distributor changes the status of the interrupt from pending to either active, or active and pending. At this point the CPU interface can signal another interrupt to the processor, to preempt interrupts that are active on the processor. If there is no pending interrupt with sufficient priority for signaling to the processor, the interface deasserts the interrupt request signal to the processor.
https://developer.arm.com/documentation/ihi0048/b/
Have you considered that Linux/arm might benefit if it could fully exploit hardware features already available, such as the interrupt priority masking feature in the GIC in existing arm systems?
On m68k, it seems you mean: irq_disabled() is true, but high-priority interrupts can still come; local_irq_disable() can disable high-priority interrupts, and at that time, irq_disabled() is also true.
TBH, this is wrong and confusing on m68k.
Like you, I was surprised when I learned about it. But that doesn't mean it's wrong. The fact that it works should tell you something.
Things could always be made simpler. But discarding features isn't necessarily an improvement.
Thanks Barry
Thanks Barry
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Thursday, February 11, 2021 11:35 AM To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org Subject: RE: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
There is no warning from m68k builds. That's because arch_irqs_disabled() returns true when the IPL is non-zero.
So for m68k, the case is arch_irqs_disabled() is true, but interrupts can still come?
Then it seems it is very confusing. If prioritized interrupts can still come while arch_irqs_disabled() is true,
Yes, on m68k CPUs, an IRQ having a priority level higher than the present priority mask will get serviced.
Non-Maskable Interrupt (NMI) is not subject to this rule and gets serviced regardless.
how could spin_lock_irqsave() block the prioritized interrupts?
It raises the mask level to 7. Again, please see arch/m68k/include/asm/irqflags.h
Hi Finn, Thanks for your explanation again.
TBH, that is why m68k is so confusing. irqs_disabled() on m68k should just reflect the status of all interrupts have been disabled except NMI.
irqs_disabled() should be consistent with the calling of APIs such as local_irq_disable, local_irq_save, spin_lock_irqsave etc.
When irqs_disabled() returns true, we cannot infer that arch_local_irq_disable() was called. But I have not yet found driver code or core kernel code attempting that inference.
Isn't arch_irqs_disabled() a status reflection of irq disable API?
Why not?
If so, arch_irqs_disabled() should mean all interrupts have been masked except NMI as NMI is unmaskable.
Can you support that claim with a reference to core kernel code or documentation? (If some arch code agrees with you, that's neither here nor there.)
I think those links I shared with you have supported this. You just don't believe it :-)
Are all interrupts (including NMI) masked whenever arch_irqs_disabled() returns true on your platforms?
On my platform, once irqs_disabled() is true, all interrupts are masked except NMI. NMI just ignores spin_lock_irqsave or local_irq_disable.
On ARM64, we also have high-priority interrupts, but they are running as PSEUDO_NMI: https://lwn.net/Articles/755906/
A glance at the ARM GIC specification suggests that your hardware works much like 68000 hardware.
When enabled, a CPU interface takes the highest priority pending interrupt for its connected processor and determines whether the interrupt has sufficient priority for it to signal the interrupt request to the processor. [...]
When the processor acknowledges the interrupt at the CPU interface, the Distributor changes the status of the interrupt from pending to either active, or active and pending. At this point the CPU interface can signal another interrupt to the processor, to preempt interrupts that are active on the processor. If there is no pending interrupt with sufficient priority for signaling to the processor, the interface deasserts the interrupt request signal to the processor.
https://developer.arm.com/documentation/ihi0048/b/
Have you considered that Linux/arm might benefit if it could fully exploit hardware features already available, such as the interrupt priority masking feature in the GIC in existing arm systems?
I guess no:-) there are only two levels: IRQ and NMI. Injecting a high-prio IRQ level between them makes no sense.
To me, arm64's design is quite clear and not confusing at all.
On m68k, it seems you mean: irq_disabled() is true, but high-priority interrupts can still come; local_irq_disable() can disable high-priority interrupts, and at that time, irq_disabled() is also true.
TBH, this is wrong and confusing on m68k.
Like you, I was surprised when I learned about it. But that doesn't mean it's wrong. The fact that it works should tell you something.
The fact is that m68k lets arch_irqs_disabled() return true to pretend all IRQs are disabled while high-priority IRQs are still open, thus "passing" all the sanity checks in genirq and the kernel core.
Things could always be made simpler. But discarding features isn't necessarily an improvement.
This feature could be used by calling local_irq_enable_in_hardirq() in those IRQ handlers that want high-priority interrupts to be able to preempt them for a while.
It shouldn't be hidden away, causing confusion.
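As a rough illustration of that opt-in (a hypothetical handler; the only real API used is local_irq_enable_in_hardirq() from include/linux/interrupt.h, which is deliberately a no-op when lockdep is enabled):

#include <linux/interrupt.h>
#include <linux/irqflags.h>

/* Hypothetical long-running handler that opts in to being preempted
 * by other interrupts for the slow part of its work. Nothing in the
 * window with interrupts enabled may rely on local irqs being masked. */
static irqreturn_t slow_device_interrupt(int irq, void *dev_id)
{
	/* Critical part: quiesce and acknowledge the device with irqs masked. */

	local_irq_enable_in_hardirq();	/* no-op under CONFIG_LOCKDEP */

	/* Lengthy processing that tolerates being interrupted. */

	local_irq_disable();
	return IRQ_HANDLED;
}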
On the other hand, those who care about realtime should use threaded IRQ and let IRQ threads preempt each other.
Thanks Barry
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
There is no warning from m68k builds. That's because arch_irqs_disabled() returns true when the IPL is non-zero.
So for m68k, the case is arch_irqs_disabled() is true, but interrupts can still come?
Then it seems it is very confusing. If prioritized interrupts can still come while arch_irqs_disabled() is true,
Yes, on m68k CPUs, an IRQ having a priority level higher than the present priority mask will get serviced.
Non-Maskable Interrupt (NMI) is not subject to this rule and gets serviced regardless.
how could spin_lock_irqsave() block the prioritized interrupts?
It raises the mask level to 7. Again, please see arch/m68k/include/asm/irqflags.h
Hi Finn, Thanks for your explanation again.
TBH, that is why m68k is so confusing. irqs_disabled() on m68k should just reflect the status of all interrupts have been disabled except NMI.
irqs_disabled() should be consistent with the calling of APIs such as local_irq_disable, local_irq_save, spin_lock_irqsave etc.
When irqs_disabled() returns true, we cannot infer that arch_local_irq_disable() was called. But I have not yet found driver code or core kernel code attempting that inference.
Isn't arch_irqs_disabled() a status reflection of irq disable API?
Why not?
If so, arch_irqs_disabled() should mean all interrupts have been masked except NMI as NMI is unmaskable.
Can you support that claim with a reference to core kernel code or documentation? (If some arch code agrees with you, that's neither here nor there.)
I think those links I share you have supported this. Just you don't believe :-)
Your links show that the distinction between fast and slow handlers was removed. Your links don't support your claim that "arch_irqs_disabled() should mean all interrupts have been masked". Where is the code that makes that inference? Where is the documentation that supports your claim?
Are all interrupts (including NMI) masked whenever arch_irqs_disabled() returns true on your platforms?
On my platform, once irqs_disabled() is true, all interrupts are masked except NMI. NMI just ignores spin_lock_irqsave or local_irq_disable.
On ARM64, we also have high-priority interrupts, but they are running as PSEUDO_NMI: https://lwn.net/Articles/755906/
A glance at the ARM GIC specification suggests that your hardware works much like 68000 hardware.
When enabled, a CPU interface takes the highest priority pending interrupt for its connected processor and determines whether the interrupt has sufficient priority for it to signal the interrupt request to the processor. [...]
When the processor acknowledges the interrupt at the CPU interface, the Distributor changes the status of the interrupt from pending to either active, or active and pending. At this point the CPU interface can signal another interrupt to the processor, to preempt interrupts that are active on the processor. If there is no pending interrupt with sufficient priority for signaling to the processor, the interface deasserts the interrupt request signal to the processor.
https://developer.arm.com/documentation/ihi0048/b/
Have you considered that Linux/arm might benefit if it could fully exploit hardware features already available, such as the interrupt priority masking feature in the GIC in existing arm systems?
I guess no:-) there are only two levels: IRQ and NMI. Injecting a high-prio IRQ level between them makes no sense.
To me, arm64's design is quite clear and not confusing at all.
Are you saying that the ARM64 hardware design is confusing because it implements a priority mask, and that's why you had to simplify it with a pseudo-nmi scheme in software?
On m68k, it seems you mean: irq_disabled() is true, but high-priority interrupts can still come; local_irq_disable() can disable high-priority interrupts, and at that time, irq_disabled() is also true.
TBH, this is wrong and confusing on m68k.
Like you, I was surprised when I learned about it. But that doesn't mean it's wrong. The fact that it works should tell you something.
The fact is that m68k lets arch_irqs_disabled() return true to pretend all IRQs are disabled while high-priority IRQs are still open, thus "passing" all the sanity checks in genirq and the kernel core.
The fact is that m68k has arch_irqs_disabled() return false when all IRQs are enabled. So there is no bug.
Things could always be made simpler. But discarding features isn't necessarily an improvement.
This feature could be used by calling local_irq_enable_in_hardirq() in those IRQ handlers who hope high-priority interrupts to preempt it for a while.
So, if one handler is sensitive to interrupt latency, all other handlers should be modified? I don't think that's workable.
In any case, what you're describing is a completely different nested interrupt scheme that would defeat the priority level mechanism that the hardware provides us with.
It shouldn't hide somewhere and make confusion.
The problem is hiding so well that no-one has found it! I say it doesn't exist.
On the other hand, those who care about realtime should use threaded IRQ and let IRQ threads preempt each other.
Yes. And those threads also have priority levels.
Thanks Barry
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Thursday, February 11, 2021 2:12 PM To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org Subject: RE: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
There is no warning from m68k builds. That's because arch_irqs_disabled() returns true when the IPL is non-zero.
So for m68k, the case is arch_irqs_disabled() is true, but interrupts can still come?
Then it seems it is very confusing. If prioritized interrupts can still come while arch_irqs_disabled() is true,
Yes, on m68k CPUs, an IRQ having a priority level higher than the present priority mask will get serviced.
Non-Maskable Interrupt (NMI) is not subject to this rule and gets serviced regardless.
how could spin_lock_irqsave() block the prioritized interrupts?
It raises the mask level to 7. Again, please see arch/m68k/include/asm/irqflags.h
Hi Finn, Thanks for your explanation again.
TBH, that is why m68k is so confusing. irqs_disabled() on m68k should just reflect the status of all interrupts have been disabled except NMI.
irqs_disabled() should be consistent with the calling of APIs such as local_irq_disable, local_irq_save, spin_lock_irqsave etc.
When irqs_disabled() returns true, we cannot infer that arch_local_irq_disable() was called. But I have not yet found driver code or core kernel code attempting that inference.
Isn't arch_irqs_disabled() a status reflection of irq disable API?
Why not?
If so, arch_irqs_disabled() should mean all interrupts have been masked except NMI as NMI is unmaskable.
Can you support that claim with a reference to core kernel code or documentation? (If some arch code agrees with you, that's neither here nor there.)
I think those links I share you have supported this. Just you don't believe :-)
Your links show that the distinction between fast and slow handlers was removed. Your links don't support your claim that "arch_irqs_disabled() should mean all interrupts have been masked". Where is the code that makes that inference? Where is the documentation that supports your claim?
(1) https://lwn.net/Articles/380931/ Looking at all these worries, one might well wonder if a system which *disabled interrupts for all handlers* would function well at all. So it is interesting to note one thing: any system which has the lockdep locking checker enabled has been running all handlers that way for some years now. Many developers and testers run lockdep-enabled kernels, and they are available for some of the more adventurous distributions (Rawhide, for example) as well. So we have quite a bit of test coverage for this mode of operation already.
(2) https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
"We run all handlers *with interrupts disabled* and expect them not to enable them. Warn when we catch one who does."
(3) https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... genirq: Run irq handlers *with interrupts disabled*
Running interrupt handlers with interrupts enabled can cause stack overflows. That has been observed with multiqueue NICs delivering all their interrupts to a single core. We might band aid that somehow by checking the interrupt stacks, but the real safe fix is to *run the irq handlers with interrupts disabled*.
All these documents say we are running irq handlers with interrupts disabled, but it seems you think high-prio interrupts don't belong to "interrupts" in those documents :-)
That is why we can't reach agreement. I think "interrupts" means all except NMI in these documents, but you insist high-prio IRQs are an exception.
Are all interrupts (including NMI) masked whenever arch_irqs_disabled() returns true on your platforms?
On my platform, once irqs_disabled() is true, all interrupts are masked except NMI. NMI just ignores spin_lock_irqsave or local_irq_disable.
On ARM64, we also have high-priority interrupts, but they are running as PSEUDO_NMI: https://lwn.net/Articles/755906/
A glance at the ARM GIC specification suggests that your hardware works much like 68000 hardware.
When enabled, a CPU interface takes the highest priority pending interrupt for its connected processor and determines whether the interrupt has sufficient priority for it to signal the interrupt request to the processor. [...]
When the processor acknowledges the interrupt at the CPU interface, the Distributor changes the status of the interrupt from pending to either active, or active and pending. At this point the CPU interface can signal another interrupt to the processor, to preempt interrupts that are active on the processor. If there is no pending interrupt with sufficient priority for signaling to the processor, the interface deasserts the interrupt request signal to the processor.
https://developer.arm.com/documentation/ihi0048/b/
Have you considered that Linux/arm might benefit if it could fully exploit hardware features already available, such as the interrupt priority masking feature in the GIC in existing arm systems?
I guess no:-) there are only two levels: IRQ and NMI. Injecting a high-prio IRQ level between them makes no sense.
To me, arm64's design is quite clear and not confusing at all.
Are you saying that the ARM64 hardware design is confusing because it implements a priority mask, and that's why you had to simplify it with a pseudo-nmi scheme in software?
No, I was not saying this. I think both m68k and arm64 have good hardware designs. It is just that Linux's implementation runs irq handlers with interrupts disabled, so ARM64's pseudo-NMI is better adapted to Linux.
On m68k, it seems you mean: irq_disabled() is true, but high-priority interrupts can still come; local_irq_disable() can disable high-priority interrupts, and at that time, irq_disabled() is also true.
TBH, this is wrong and confusing on m68k.
Like you, I was surprised when I learned about it. But that doesn't mean it's wrong. The fact that it works should tell you something.
The fact is that m68k lets arch_irq_disabled() return true to pretend all IRQs are disabled while high-priority IRQ is still open, thus "pass" all sanitizing check in genirq and kernel core.
The fact is that m68k has arch_irq_disabled() return false when all IRQs are enabled. So there is no bug.
But it has arch_irq_disabled() return true while some interrupts(not NMI) are still open.
Things could always be made simpler. But discarding features isn't necessarily an improvement.
This feature could be used by calling local_irq_enable_in_hardirq() in those IRQ handlers who hope high-priority interrupts to preempt it for a while.
So, if one handler is sensitive to interrupt latency, all other handlers should be modified? I don't think that's workable.
I think we can just enable preempt_rt or force threaded_irq, and then raise the priority of the irq thread that is sensitive to latency. No need to touch all threads.
I also understand your point: if we let one high-prio interrupt preempt a low-priority one, we don't need to change the whole system. But I think Linux prefers threaded_irq or preempt_rt for this kind of problem.
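A hedged sketch of that approach (the mydev_* names are hypothetical; request_threaded_irq() and IRQF_ONESHOT are the standard API): the hard handler only quiesces the device, the bulk of the work runs in an irq thread, and that thread's real-time priority can be raised independently of other handlers, e.g. with chrt from user space.

#include <linux/interrupt.h>

/* Hard handler: runs in irq context, does the minimum needed to
 * silence the device, then wakes the irq thread. */
static irqreturn_t mydev_hardirq(int irq, void *dev_id)
{
	/* mask/ack the device interrupt here */
	return IRQ_WAKE_THREAD;
}

/* Thread handler: runs in a SCHED_FIFO kernel thread ("irq/N-mydev"). */
static irqreturn_t mydev_thread_fn(int irq, void *dev_id)
{
	/* bulk of the processing */
	return IRQ_HANDLED;
}

static int mydev_setup_irq(unsigned int irq, void *dev_id)
{
	/* IRQF_ONESHOT keeps the line masked until the thread finishes. */
	return request_threaded_irq(irq, mydev_hardirq, mydev_thread_fn,
				    IRQF_ONESHOT, "mydev", dev_id);
}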
In any case, what you're describing is a completely different nested interrupt scheme that would defeat the priority level mechanism that the hardware provides us with.
Yes. Indeed.
It shouldn't hide somewhere and make confusion.
The problem is hiding so well that no-one has found it! I say it doesn't exist.
Long ago (before 2.6.38), the kernel supported IRQF_DISABLED and nested interrupts were widely supported, yet systems still ran well in most cases. That means nested interrupts don't really matter in most cases. That is why m68k also runs well even though it still nests interrupts.
On the other hand, those who care about realtime should use threaded IRQ and let IRQ threads preempt each other.
Yes. And those threads also have priority levels.
Finn, I am not an m68k guy; would you help check whether this could trigger a warning on m68k? Maybe we can then discuss this question on the genirq mailing list based on that warning, if you are happy to. Thanks very much.
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index 7c9d6a2d7e90..b8ca27555c76 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -32,6 +32,7 @@ static __always_inline void rcu_irq_enter_check_tick(void)
  */
 #define __irq_enter()						\
 	do {							\
+		WARN_ONCE(in_hardirq() && irqs_disabled(), "nested interrupts\n"); \
 		preempt_count_add(HARDIRQ_OFFSET);		\
 		lockdep_hardirq_enter();			\
 		account_hardirq_enter(current);			\
@@ -44,6 +45,7 @@ static __always_inline void rcu_irq_enter_check_tick(void)
  */
 #define __irq_enter_raw()					\
 	do {							\
+		WARN_ONCE(in_hardirq() && irqs_disabled(), "nested interrupts\n"); \
 		preempt_count_add(HARDIRQ_OFFSET);		\
 		lockdep_hardirq_enter();			\
 	} while (0)
Best Regards Barry
On Thu, 11 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
TBH, that is why m68k is so confusing. irqs_disabled() on m68k should just reflect the status of all interrupts have been disabled except NMI.
irqs_disabled() should be consistent with the calling of APIs such as local_irq_disable, local_irq_save, spin_lock_irqsave etc.
When irqs_disabled() returns true, we cannot infer that arch_local_irq_disable() was called. But I have not yet found driver code or core kernel code attempting that inference.
Isn't arch_irqs_disabled() a status reflection of irq disable API?
Why not?
If so, arch_irqs_disabled() should mean all interrupts have been masked except NMI as NMI is unmaskable.
Can you support that claim with a reference to core kernel code or documentation? (If some arch code agrees with you, that's neither here nor there.)
I think those links I share you have supported this. Just you don't believe :-)
Your links show that the distinction between fast and slow handlers was removed. Your links don't support your claim that "arch_irqs_disabled() should mean all interrupts have been masked". Where is the code that makes that inference? Where is the documentation that supports your claim?
(1) https://lwn.net/Articles/380931/ Looking at all these worries, one might well wonder if a system which *disabled interrupts for all handlers* would function well at all. So it is interesting to note one thing: any system which has the lockdep locking checker enabled has been running all handlers that way for some years now. Many developers and testers run lockdep-enabled kernels, and they are available for some of the more adventurous distributions (Rawhide, for example) as well. So we have quite a bit of test coverage for this mode of operation already.
IIUC, your claim is that CONFIG_LOCKDEP involves code that contains the inference, "arch_irqs_disabled() means all interrupts have been masked".
Unfortunately, m68k lacks CONFIG_LOCKDEP support so I can't easily confirm this. I suppose there may be other architectures that support both LOCKDEP and nested interrupts (?)
Anyway, if you would point to the code that contains said inference, that would help a lot.
(2) https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
"We run all handlers *with interrupts disabled* and expect them not to enable them. Warn when we catch one who does."
Again, you don't see that warning because irqs_disabled() correctly returns true. You can confirm this fact in QEMU or Aranym if you are interested.
(3) https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... genirq: Run irq handlers *with interrupts disabled*
Running interrupt handlers with interrupts enabled can cause stack overflows. That has been observed with multiqueue NICs delivering all their interrupts to a single core. We might band aid that somehow by checking the interrupt stacks, but the real safe fix is to *run the irq handlers with interrupts disabled*.
Again, the stack overflow issue is not applicable. 68000 uses a priority mask, like ARM GIC. So there's no arbitrary nesting of interrupt handlers.
In practice stack overflows simply don't occur on m68k. Please do try it.
All these documents say we are running irq handler with interrupts disabled. but it seems you think high-prio interrupts don't belong to "interrupts" in those documents :-)
that is why we can't get agreement. I think "interrupts" mean all except NMI in these documents, but you insist high-prio IRQ is an exception.
We can't get agreement because you seek to remove functionality without justification.
Are all interrupts (including NMI) masked whenever arch_irqs_disabled() returns true on your platforms?
On my platform, once irqs_disabled() is true, all interrupts are masked except NMI. NMI just ignores spin_lock_irqsave or local_irq_disable.
On ARM64, we also have high-priority interrupts, but they are running as PSEUDO_NMI: https://lwn.net/Articles/755906/
A glance at the ARM GIC specification suggests that your hardware works much like 68000 hardware.
When enabled, a CPU interface takes the highest priority pending interrupt for its connected processor and determines whether the interrupt has sufficient priority for it to signal the interrupt request to the processor. [...]
When the processor acknowledges the interrupt at the CPU interface, the Distributor changes the status of the interrupt from pending to either active, or active and pending. At this point the CPU interface can signal another interrupt to the processor, to preempt interrupts that are active on the processor. If there is no pending interrupt with sufficient priority for signaling to the processor, the interface deasserts the interrupt request signal to the processor.
https://developer.arm.com/documentation/ihi0048/b/
Have you considered that Linux/arm might benefit if it could fully exploit hardware features already available, such as the interrupt priority masking feature in the GIC in existing arm systems?
I guess no:-) there are only two levels: IRQ and NMI. Injecting a high-prio IRQ level between them makes no sense.
To me, arm64's design is quite clear and not confusing at all.
Are you saying that the ARM64 hardware design is confusing because it implements a priority mask, and that's why you had to simplify it with a pseudo-nmi scheme in software?
No, I was not saying this. I think both m68k and arm64 have good hardware design. Just Linux's implementation is running irq-handlers with interrupts disabled. So ARM64's pseudo-nmi is adapted to Linux better.
So, a platform should do what all the other platforms do because to deviate would be too dangerous?
On m68k, it seems you mean: irq_disabled() is true, but high-priority interrupts can still come; local_irq_disable() can disable high-priority interrupts, and at that time, irq_disabled() is also true.
TBH, this is wrong and confusing on m68k.
Like you, I was surprised when I learned about it. But that doesn't mean it's wrong. The fact that it works should tell you something.
The fact is that m68k lets arch_irq_disabled() return true to pretend all IRQs are disabled while high-priority IRQ is still open, thus "pass" all sanitizing check in genirq and kernel core.
The fact is that m68k has arch_irq_disabled() return false when all IRQs are enabled. So there is no bug.
But it has arch_irq_disabled() return true while some interrupts(not NMI) are still open.
Things could always be made simpler. But discarding features isn't necessarily an improvement.
This feature could be used by calling local_irq_enable_in_hardirq() in those IRQ handlers who hope high-priority interrupts to preempt it for a while.
So, if one handler is sensitive to interrupt latency, all other handlers should be modified? I don't think that's workable.
I think we just enable preempt_rt or force threaded_irq, and then improve the priority of the irq thread who is sensitive to latency. No need to touch all threads.
I also understand your point, we let one high-prio interrupt preempt low priority interrupt, then we don't need to change the whole system. But I think Linux prefers the method of threaded_irq or preempt_rt for this kind of problems.
So, some interrupt (or exception) processing happens atomically and the rest is deferred to a different execution context. (Not a new idea.)
If you introduce latency in the former context you can't win it back in the latter. Your solution fails because it adds latency to high priority handlers.
In any case, what you're describing is a completely different nested interrupt scheme that would defeat the priority level mechanism that the hardware provides us with.
Yes. Indeed.
It shouldn't hide somewhere and make confusion.
The problem is hiding so well that no-one has found it! I say it doesn't exist.
Long long ago(before 2.6.38), we had a kernel supporting IRQF_DISABLED and nested interrupts were widely supported, but system also ran well in most cases. That means nested interrupts don't really matter in most cases. That is why m68k is also running well even though it is still nesting.
No, m68k runs well because it uses priority masking. It is not because some cases are untested.
Your hardware may not have been around for 4 decades but it implements the same capability because the design is known to work.
On the other hand, those who care about realtime should use threaded IRQ and let IRQ threads preempt each other.
Yes. And those threads also have priority levels.
Finn, I am not a m68k guy, would you help check if this could activate a warning on m68k. maybe we can discuss this question in genirq maillist from this warning if you are happy. Thanks very much.
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index 7c9d6a2d7e90..b8ca27555c76 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -32,6 +32,7 @@ static __always_inline void rcu_irq_enter_check_tick(void)
  */
 #define __irq_enter()						\
 	do {							\
+		WARN_ONCE(in_hardirq() && irqs_disabled(), "nested interrupts\n"); \
 		preempt_count_add(HARDIRQ_OFFSET);		\
 		lockdep_hardirq_enter();			\
 		account_hardirq_enter(current);			\
@@ -44,6 +45,7 @@ static __always_inline void rcu_irq_enter_check_tick(void)
  */
 #define __irq_enter_raw()					\
 	do {							\
+		WARN_ONCE(in_hardirq() && irqs_disabled(), "nested interrupts\n"); \
 		preempt_count_add(HARDIRQ_OFFSET);		\
 		lockdep_hardirq_enter();			\
 	} while (0)
If you think that lockdep or some other code somewhere should be protected in this way, perhaps you can point to that code. Otherwise, your patch seems to lack any justification.
Best Regards Barry
-----Original Message----- From: Finn Thain [mailto:fthain@telegraphics.com.au] Sent: Friday, February 12, 2021 12:58 PM To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org Subject: RE: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Thu, 11 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Wed, 10 Feb 2021, Song Bao Hua (Barry Song) wrote:
TBH, that is why m68k is so confusing. irqs_disabled() on m68k should just reflect the status of all interrupts have been disabled except NMI.
irqs_disabled() should be consistent with the calling of APIs such as local_irq_disable, local_irq_save, spin_lock_irqsave etc.
When irqs_disabled() returns true, we cannot infer that arch_local_irq_disable() was called. But I have not yet found driver code or core kernel code attempting that inference.
Isn't arch_irqs_disabled() a status reflection of irq disable API? Why not?
If so, arch_irqs_disabled() should mean all interrupts have been masked except NMI as NMI is unmaskable.
Can you support that claim with a reference to core kernel code or documentation? (If some arch code agrees with you, that's neither here nor there.)
I think those links I share you have supported this. Just you don't believe :-)
Your links show that the distinction between fast and slow handlers was removed. Your links don't support your claim that "arch_irqs_disabled() should mean all interrupts have been masked". Where is the code that makes that inference? Where is the documentation that supports your claim?
(1) https://lwn.net/Articles/380931/ Looking at all these worries, one might well wonder if a system which *disabled interrupts for all handlers* would function well at all. So it is interesting to note one thing: any system which has the lockdep locking checker enabled has been running all handlers that way for some years now. Many developers and testers run lockdep-enabled kernels, and they are available for some of the more adventurous distributions (Rawhide, for example) as well. So we have quite a bit of test coverage for this mode of operation already.
IIUC, your claim is that CONFIG_LOCKDEP involves code that contains the inference, "arch_irqs_disabled() means all interrupts have been masked".
Unfortunately, m68k lacks CONFIG_LOCKDEP support so I can't easily confirm this. I suppose there may be other architectures that support both LOCKDEP and nested interrupts (?)
Anyway, if you would point to the code that contains said inference, that would help a lot.
(2)
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b738a50a
"We run all handlers *with interrupts disabled* and expect them not to enable them. Warn when we catch one who does."
Again, you don't see that warning because irqs_disabled() correctly returns true. You can confirm this fact in QEMU or Aranym if you are interested.
(3)
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e58aa3d2d0cc
genirq: Run irq handlers *with interrupts disabled*
Running interrupt handlers with interrupts enabled can cause stack overflows. That has been observed with multiqueue NICs delivering all their interrupts to a single core. We might band aid that somehow by checking the interrupt stacks, but the real safe fix is to *run the irq handlers with interrupts disabled*.
Again, the stack overflow issue is not applicable. 68000 uses a priority mask, like ARM GIC. So there's no arbitrary nesting of interrupt handlers.
In practice stack overflows simply don't occur on m68k. Please do try it.
All these documents say we are running irq handler with interrupts disabled. but it seems you think high-prio interrupts don't belong to "interrupts" in those documents :-)
that is why we can't get agreement. I think "interrupts" mean all except NMI in these documents, but you insist high-prio IRQ is an exception.
We can't get agreement because you seek to remove functionality without justification.
Are all interrupts (including NMI) masked whenever arch_irqs_disabled() returns true on your platforms?
On my platform, once irqs_disabled() is true, all interrupts are masked except NMI. NMI just ignores spin_lock_irqsave or local_irq_disable.
On ARM64, we also have high-priority interrupts, but they are running as PSEUDO_NMI: https://lwn.net/Articles/755906/
A glance at the ARM GIC specification suggests that your hardware works much like 68000 hardware.
When enabled, a CPU interface takes the highest priority pending interrupt for its connected processor and determines whether the interrupt has sufficient priority for it to signal the interrupt request to the processor. [...]
When the processor acknowledges the interrupt at the CPU interface, the Distributor changes the status of the interrupt from pending to either active, or active and pending. At this point the CPU interface can signal another interrupt to the processor, to preempt interrupts that are active on the processor. If there is no pending interrupt with sufficient priority for signaling to the processor, the interface deasserts the interrupt request signal to the processor.
https://developer.arm.com/documentation/ihi0048/b/
Have you considered that Linux/arm might benefit if it could fully exploit hardware features already available, such as the interrupt priority masking feature in the GIC in existing arm systems?
I guess not :-) There are only two levels: IRQ and NMI. Injecting a high-priority IRQ level between them makes no sense.
To me, arm64's design is quite clear and causes no confusion.
Are you saying that the ARM64 hardware design is confusing because it implements a priority mask, and that's why you had to simplify it with a pseudo-nmi scheme in software?
No, I was not saying that. I think both m68k and arm64 have good hardware designs. It is just that Linux's implementation runs irq handlers with interrupts disabled, so ARM64's pseudo-NMI is better adapted to Linux.
So, a platform should do what all the other platforms do because to deviate would be too dangerous?
On m68k, it seems you mean: irqs_disabled() is true, but high-priority interrupts can still arrive; local_irq_disable() can disable high-priority interrupts, and at that time irqs_disabled() is also true.
TBH, this is wrong and confusing on m68k.
Like you, I was surprised when I learned about it. But that doesn't mean it's wrong. The fact that it works should tell you something.
The fact is that m68k lets arch_irqs_disabled() return true to pretend all IRQs are disabled while high-priority IRQs are still open, and thus "passes" all the sanity checks in genirq and the kernel core.
The fact is that m68k has arch_irqs_disabled() return false when all IRQs are enabled. So there is no bug.
But it has arch_irqs_disabled() return true while some interrupts (not NMI) are still open.
Things could always be made simpler. But discarding features isn't necessarily an improvement.
This feature could be used by calling local_irq_enable_in_hardirq() in those IRQ handlers that want high-priority interrupts to be able to preempt them for a while.
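A minimal sketch of how a handler might opt in to that (the handler, device and helper names are hypothetical, not from any existing driver):

    #include <linux/interrupt.h>
    #include <linux/irqflags.h>

    struct example_dev { void __iomem *regs; };        /* hypothetical device */

    static void example_drain_fifo(struct example_dev *dev) { /* ... */ }

    static irqreturn_t example_irq_handler(int irq, void *dev_id)
    {
            struct example_dev *dev = dev_id;

            /*
             * Allow higher-priority interrupts to preempt the slow part of
             * this handler on platforms that support it; as I read it, this
             * call is a no-op when lockdep is enabled.
             */
            local_irq_enable_in_hardirq();
            example_drain_fifo(dev);
            local_irq_disable();    /* genirq expects interrupts off on return */

            return IRQ_HANDLED;
    }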
So, if one handler is sensitive to interrupt latency, all other handlers should be modified? I don't think that's workable.
I think we can just enable preempt_rt or forced threaded IRQs, and then raise the priority of the irq thread that is sensitive to latency. There is no need to touch all threads.
I also understand your point: if we let a high-priority interrupt preempt a low-priority one, we don't need to change the whole system. But I think Linux prefers threaded IRQs or preempt_rt for this kind of problem.
So, some interrupt (or exception) processing happens atomically and the rest is deferred to a different execution context. (Not a new idea.)
If you introduce latency in the former context you can't win it back in the latter. Your solution fails because it adds latency to high priority handlers.
In any case, what you're describing is a completely different nested interrupt scheme that would defeat the priority level mechanism that the hardware provides us with.
Yes. Indeed.
It shouldn't be hidden away somewhere, causing confusion.
The problem is hiding so well that no-one has found it! I say it doesn't exist.
Long ago (before 2.6.38), the kernel supported IRQF_DISABLED and nested interrupts were widely used, yet the system still ran well in most cases. That means nested interrupts don't really matter in most cases. That is why m68k also runs well even though it still nests interrupts.
No, m68k runs well because it uses priority masking. It is not because some cases are untested.
Your hardware may not have been around for 4 decades but it implements the same capability because the design is known to work.
On the other hand, those who care about realtime should use threaded IRQ and let IRQ threads preempt each other.
Yes. And those threads also have priority levels.
Finn, I am not an m68k guy; could you help check whether this triggers a warning on m68k? Maybe we can then discuss this question on the genirq mailing list based on that warning, if you are happy to. Thanks very much.
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index 7c9d6a2d7e90..b8ca27555c76 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -32,6 +32,7 @@ static __always_inline void rcu_irq_enter_check_tick(void)
  */
 #define __irq_enter()						\
 	do {							\
+		WARN_ONCE(in_hardirq() && irqs_disabled(), "nested interrupts\n"); \
 		preempt_count_add(HARDIRQ_OFFSET);		\
 		lockdep_hardirq_enter();			\
 		account_hardirq_enter(current);			\
@@ -44,6 +45,7 @@ static __always_inline void rcu_irq_enter_check_tick(void)
  */
 #define __irq_enter_raw()					\
 	do {							\
+		WARN_ONCE(in_hardirq() && irqs_disabled(), "nested interrupts\n"); \
 		preempt_count_add(HARDIRQ_OFFSET);		\
 		lockdep_hardirq_enter();			\
 	} while (0)
If you think that lockdep or some other code somewhere should be protected in this way, perhaps you can point to that code. Otherwise, your patch seems to lack any justification.
OK, I am not saying this patch is right. I was actually expecting a warning log with which to start the discussion in genirq. Anyway, I think this is a very important problem that we need to clarify in genirq. If irq handlers are able to run with some high-priority interrupts enabled on platforms like m68k, we need to document that somewhere.
Anyway, I think it is important to clarify this thoroughly in genirq by starting a discussion with the relevant people.
Best Regards Barry
Thanks Barry
Hi Finn,
On 2021/2/9 13:06, Finn Thain wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
-----Original Message-----
From: Finn Thain [mailto:fthain@telegraphics.com.au]
Sent: Monday, February 8, 2021 8:57 PM
To: tanxiaofei tanxiaofei@huawei.com
Cc: jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org
Subject: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI drivers. There are no function changes, but may speed up if interrupt happen too often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling irq in hardIRQ is simply wrong. Since this commit https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... genirq: Run irq handlers with interrupts disabled
interrupt handlers are definitely running in an irq-disabled context unless the handler explicitly enables interrupts to permit other interrupts.
Repeating the same claim does not somehow make it true. If you put your claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
According to your discussion with Barry, it seems that m68k is a little different from other architectures, and the kind of modification in this patch set cannot be applied to m68k. So, could you help point out which drivers in this SCSI patch set belong to the m68k architecture? I can remove them.
BTW, sonic_interrupt() is from the net driver natsemi, right? I would appreciate it if we only discussed SCSI drivers in this patch set. Thanks.
On Thu, 18 Feb 2021, Xiaofei Tan wrote:
On 2021/2/9 13:06, Finn Thain wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI drivers. There are no function changes, but may speed up if interrupt happen too often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling irq in hardIRQ is simply wrong. Since this commit https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... genirq: Run irq handlers with interrupts disabled
interrupt handlers are definitely running in an irq-disabled context unless the handler explicitly enables interrupts to permit other interrupts.
Repeating the same claim does not somehow make it true. If you put your claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
According to your discussion with Barry, it seems that m68k is a little different from other architecture, and this kind of modification of this patch cannot be applied to m68k. So, could help to point out which driver belong to m68k architecture in this patch set of SCSI? I can remove them.
If you would claim that "there are no function changes" in your patches (as above) then the onus is on you to support that claim.
I assume that there are some platforms on which your assumptions hold.
With regard to drivers for those platforms, you might want to explain why your patches should be applied there, given that the existing code is superior for being more portable.
BTW, sonic_interrupt() is from net driver natsemi, right? It would be appreciative if only discuss SCSI drivers in this patch set. thanks.
The 'net' subsystem does have some different requirements than the 'scsi' subsystem. But I don't see how that's relevant. Perhaps you can explain it. Thanks.
-----Original Message-----
From: Finn Thain [mailto:fthain@telegraphics.com.au]
Sent: Saturday, February 20, 2021 6:18 PM
To: tanxiaofei tanxiaofei@huawei.com
Cc: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org
Subject: Re: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Thu, 18 Feb 2021, Xiaofei Tan wrote:
On 2021/2/9 13:06, Finn Thain wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI drivers. There are no function changes, but may speed up if interrupt happen too often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling irq in hardIRQ is simply wrong. Since this commit
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e58aa3d2d0cc
genirq: Run irq handlers with interrupts disabled
interrupt handlers are definitely running in an irq-disabled context unless the handler explicitly enables interrupts to permit other interrupts.
Repeating the same claim does not somehow make it true. If you put your claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
According to your discussion with Barry, it seems that m68k is a little different from other architecture, and this kind of modification of this patch cannot be applied to m68k. So, could help to point out which driver belong to m68k architecture in this patch set of SCSI? I can remove them.
If you would claim that "there are no function changes" in your patches (as above) then the onus is on you to support that claim.
I assume that there are some platforms on which your assumptions hold.
With regard to drivers for those platforms, you might want to explain why your patches should be applied there, given that the existing code is superior for being more portable.
I don't think this has anything to do with portability. In the case of sonic_interrupt() that you pointed out, on m68k a high-priority interrupt can preempt a low-priority one, and both access the same critical data. m68k's spin_lock_irqsave() can disable the high-priority interrupt and avoid the race on that data. So that case should not be touched; I'd like to accept the reality and leave sonic_interrupt() alone.
However, even on m68k, spin_lock_irqsave is not needed for other, ordinary cases. If no other irq handler comes in and accesses the same critical data, it is pointless to hold a redundant irqsave lock in an irq handler, even on m68k.
In thread contexts, we always need it when an irq handler can preempt those threads and access the same data. In hardirq, if there is a high-priority interrupt that can jump in on m68k and access the critical data that needs protection, we use spin_lock_irqsave, as you did in sonic_interrupt(). Otherwise, the irqsave is also redundant on m68k.
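To make the two cases concrete, here is a rough sketch (the structure and names are made up, not taken from any driver in this series):

    #include <linux/interrupt.h>
    #include <linux/list.h>
    #include <linux/spinlock.h>

    struct example_priv {
            spinlock_t lock;
            struct list_head hw_done;
            struct list_head to_complete;
    };

    /*
     * Case 1: the data is only touched by this one handler and by process
     * context. Inside the handler, plain spin_lock() is enough; the
     * process-context side still has to disable interrupts when locking.
     */
    static irqreturn_t only_user_irq(int irq, void *dev_id)
    {
            struct example_priv *priv = dev_id;

            spin_lock(&priv->lock);
            list_splice_init(&priv->hw_done, &priv->to_complete);
            spin_unlock(&priv->lock);
            return IRQ_HANDLED;
    }

    /*
     * Case 2: on a platform with priority-masked (nested) interrupts, a
     * higher-priority handler can preempt this one and touch the same
     * data; only then does the handler itself need the irqsave form.
     */
    static irqreturn_t preemptible_irq(int irq, void *dev_id)
    {
            struct example_priv *priv = dev_id;
            unsigned long flags;

            spin_lock_irqsave(&priv->lock, flags);
            list_splice_init(&priv->hw_done, &priv->to_complete);
            spin_unlock_irqrestore(&priv->lock, flags);
            return IRQ_HANDLED;
    }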
BTW, sonic_interrupt() is from net driver natsemi, right? It would be appreciative if only discuss SCSI drivers in this patch set. thanks.
The 'net' subsystem does have some different requirements than the 'scsi' subsystem. But I don't see how that's relevant. Perhaps you can explain it. Thanks.
What makes the difference is whether there are two co-existing interrupts that can access the same critical data on m68k. I don't think net vs. scsi matters; what really matters is the specific driver.
Thanks Barry
On Mon, 22 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Thu, 18 Feb 2021, Xiaofei Tan wrote:
On 2021/2/9 13:06, Finn Thain wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
> Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI
> drivers. There are no function changes, but may speed up if
> interrupt happen too often.
This change doesn't necessarily work on platforms that support nested interrupts.
Were you able to measure any benefit from this change on some other platform?
I think the code disabling irq in hardIRQ is simply wrong. Since this commit https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i... genirq: Run irq handlers with interrupts disabled
interrupt handlers are definitely running in an irq-disabled context unless the handler explicitly enables interrupts to permit other interrupts.
Repeating the same claim does not somehow make it true. If you put your claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
According to your discussion with Barry, it seems that m68k is a little different from other architecture, and this kind of modification of this patch cannot be applied to m68k. So, could help to point out which driver belong to m68k architecture in this patch set of SCSI? I can remove them.
If you would claim that "there are no function changes" in your patches (as above) then the onus is on you to support that claim.
I assume that there are some platforms on which your assumptions hold.
With regard to drivers for those platforms, you might want to explain why your patches should be applied there, given that the existing code is superior for being more portable.
I don't think it has nothing to do with portability. In the case of sonic_interrupt() you pointed out, on m68k, there is a high-priority interrupt can preempt low-priority interrupt, they will result in access the same critical data. M68K's spin_lock_irqsave() can disable the high-priority interrupt and avoid the race condition of the data. So the case should not be touched. I'd like to accept the reality and leave sonic_interrupt() alone.
However, even on m68k, spin_lock_irqsave is not needed for other ordinary cases. If there is no other irq handler coming to access same critical data, it is pointless to hold a redundant irqsave lock in irqhandler even on m68k.
In thread contexts, we always need that if an irqhandler can preempt those threads and access the same data. In hardirq, if there is an high-priority which can jump out on m68k to access the critical data which needs protection, we use the spin_lock_irqsave as you have used in sonic_interrupt(). Otherwise, the irqsave is also redundant for m68k.
BTW, sonic_interrupt() is from net driver natsemi, right? It would be appreciative if only discuss SCSI drivers in this patch set. thanks.
The 'net' subsystem does have some different requirements than the 'scsi' subsystem. But I don't see how that's relevant. Perhaps you can explain it. Thanks.
The difference is that if there are two co-existing interrupts which can access the same critical data on m68k. I don't think net and scsi matter. What really matters is the specific driver.
Regarding m68k, your analysis overlooks the timing issue. E.g. patch 11/32 could be a problem because removing the irqsave would allow PDMA transfers to be interrupted. Aside from the timing issues, I agree with your analysis above regarding m68k.
With regard to other architectures and platforms, in specific cases, e.g. where there's never more than one IRQ involved, then I could agree that your assumptions probably hold and an irqsave would be probably redundant.
When you find a redundant irqsave, to actually patch it would bring a risk of regression with little or no reward. It's not my place to veto this entire patch series on that basis but IMO this kind of churn is misguided.
Thanks Barry
-----Original Message-----
From: Finn Thain [mailto:fthain@telegraphics.com.au]
Sent: Tuesday, February 23, 2021 6:25 PM
To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com
Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org
Subject: RE: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Mon, 22 Feb 2021, Song Bao Hua (Barry Song) wrote:
On Thu, 18 Feb 2021, Xiaofei Tan wrote:
On 2021/2/9 13:06, Finn Thain wrote:
On Tue, 9 Feb 2021, Song Bao Hua (Barry Song) wrote:
> On Sun, 7 Feb 2021, Xiaofei Tan wrote:
>
> > Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI
> > drivers. There are no function changes, but may speed up if
> > interrupt happen too often.
>
> This change doesn't necessarily work on platforms that support
> nested interrupts.
>
> Were you able to measure any benefit from this change on some
> other platform?
I think the code disabling irq in hardIRQ is simply wrong. Since this commit
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e58aa3d2d0cc
genirq: Run irq handlers with interrupts disabled
interrupt handlers are definitely running in an irq-disabled context unless the handler explicitly enables interrupts to permit other interrupts.
Repeating the same claim does not somehow make it true. If you put your claim to the test, you'll see that interrupts are not disabled on m68k when interrupt handlers execute.
The Interrupt Priority Level (IPL) can prevent any given irq handler from being re-entered, but an irq with a higher priority level may be handled during execution of a lower priority irq handler.
sonic_interrupt() uses an irq lock within an interrupt handler to avoid issues relating to this. This kind of locking may be needed in the drivers you are trying to patch. Or it might not. Apparently, no-one has looked.
According to your discussion with Barry, it seems that m68k is a little different from other architecture, and this kind of modification of this patch cannot be applied to m68k. So, could help to point out which driver belong to m68k architecture in this patch set of SCSI? I can remove them.
If you would claim that "there are no function changes" in your patches (as above) then the onus is on you to support that claim.
I assume that there are some platforms on which your assumptions hold.
With regard to drivers for those platforms, you might want to explain why your patches should be applied there, given that the existing code is superior for being more portable.
I don't think it has nothing to do with portability. In the case of sonic_interrupt() you pointed out, on m68k, there is a high-priority interrupt can preempt low-priority interrupt, they will result in access the same critical data. M68K's spin_lock_irqsave() can disable the high-priority interrupt and avoid the race condition of the data. So the case should not be touched. I'd like to accept the reality and leave sonic_interrupt() alone.
However, even on m68k, spin_lock_irqsave is not needed for other ordinary cases. If there is no other irq handler coming to access same critical data, it is pointless to hold a redundant irqsave lock in irqhandler even on m68k.
In thread contexts, we always need that if an irqhandler can preempt those threads and access the same data. In hardirq, if there is an high-priority which can jump out on m68k to access the critical data which needs protection, we use the spin_lock_irqsave as you have used in sonic_interrupt(). Otherwise, the irqsave is also redundant for m68k.
BTW, sonic_interrupt() is from net driver natsemi, right? It would be appreciative if only discuss SCSI drivers in this patch set. thanks.
The 'net' subsystem does have some different requirements than the 'scsi' subsystem. But I don't see how that's relevant. Perhaps you can explain it. Thanks.
The difference is that if there are two co-existing interrupts which can access the same critical data on m68k. I don't think net and scsi matter. What really matters is the specific driver.
Regarding m68k, your analysis overlooks the timing issue. E.g. patch 11/32 could be a problem because removing the irqsave would allow PDMA transfers to be interrupted. Aside from the timing issues, I agree with your analysis above regarding m68k.
You mentioned you need realtime, so you want an interrupt to be able to preempt another one. Now you say you want an interrupt not to be preempted because that would cause a timing issue. If this PDMA transfer has a problem when it is preempted, I believe we need some more robust way to handle it; otherwise, once we enable preempt_rt or threaded IRQs, we will hit the timing issue. So this needs a clear comment, and IRQF_NO_THREAD if that is the case.
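For reference, a rough sketch of how a driver might request such a non-threadable handler (the device, handler and name string here are hypothetical):

    #include <linux/interrupt.h>

    struct example_dev { int irq; };        /* hypothetical device */

    static irqreturn_t example_pdma_irq(int irq, void *dev_id)
    {
            /* timing-sensitive PDMA handling would go here */
            return IRQ_HANDLED;
    }

    static int example_request_pdma_irq(struct example_dev *dev)
    {
            /*
             * IRQF_NO_THREAD keeps the handler in hard IRQ context even
             * under forced threading / PREEMPT_RT, preserving its timing
             * assumptions.
             */
            return request_irq(dev->irq, example_pdma_irq, IRQF_NO_THREAD,
                               "example-pdma", dev);
    }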
With regard to other architectures and platforms, in specific cases, e.g. where there's never more than one IRQ involved, then I could agree that your assumptions probably hold and an irqsave would be probably redundant.
When you find a redundant irqsave, to actually patch it would bring a risk of regression with little or no reward. It's not my place to veto this entire patch series on that basis but IMO this kind of churn is misguided.
Nope.
I would say the real misguidance is code that takes a lock it doesn't need. It is easy to add redundant locks or to exaggerate their coverage; the smarter way is to add locks only where they are really needed, considering both concurrency and realtime performance.
Thanks Barry
On Tue, 23 Feb 2021, Song Bao Hua (Barry Song) wrote:
Regarding m68k, your analysis overlooks the timing issue. E.g. patch 11/32 could be a problem because removing the irqsave would allow PDMA transfers to be interrupted. Aside from the timing issues, I agree with your analysis above regarding m68k.
You mentioned you need realtime so you want an interrupt to be able to preempt another one.
That's not what I said. But for the sake of discussion, yes, I do know people who run Linux on ARM hardware (if Android vendor kernels can be called "Linux") and who would benefit from realtime support on those devices.
Now you said you want an interrupt not to be preempted as it will make a timing issue.
mac_esp deliberately constrains segment sizes so that it can harmlessly disable interrupts for the duration of the transfer.
Maybe the irqsave in this driver is over-cautious. Who knows? The PDMA timing problem relates to SCSI bus signalling and the tolerance of real-world SCSI devices to same. The other problem is that the PDMA logic circuit is undocumented hardware. So there may be further timing requirements lurking there. Therefore, patch 11/32 is too risky.
If this PDMA transfer will have some problem when it is preempted, I believe we need some enhanced ways to handle this, otherwise, once we enable preempt_rt or threaded_irq, it will get the timing issue. so here it needs a clear comment and IRQF_NO_THREAD if this is the case.
People who require fast response times cannot expect random drivers or platforms to meet such requirements. I fear you may be asking too much from Mac Quadra machines.
With regard to other architectures and platforms, in specific cases, e.g. where there's never more than one IRQ involved, then I could agree that your assumptions probably hold and an irqsave would be probably redundant.
When you find a redundant irqsave, to actually patch it would bring a risk of regression with little or no reward. It's not my place to veto this entire patch series on that basis but IMO this kind of churn is misguided.
Nope.
I would say the real misguidance is that the code adds one lock while it doesn't need the lock. Easily we can add redundant locks or exaggerate the coverage range of locks, but the smarter way is that people add locks only when they really need the lock by considering concurrency and realtime performance.
You appear to be debating a strawman. No-one is advocating excessive locking in new code.
Thanks Barry
-----Original Message-----
From: Finn Thain [mailto:fthain@telegraphics.com.au]
Sent: Wednesday, February 24, 2021 6:21 PM
To: Song Bao Hua (Barry Song) song.bao.hua@hisilicon.com
Cc: tanxiaofei tanxiaofei@huawei.com; jejb@linux.ibm.com; martin.petersen@oracle.com; linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org; linuxarm@openeuler.org; linux-m68k@vger.kernel.org
Subject: [Linuxarm] Re: [PATCH for-next 00/32] spin lock usage optimization for SCSI drivers
On Tue, 23 Feb 2021, Song Bao Hua (Barry Song) wrote:
Regarding m68k, your analysis overlooks the timing issue. E.g. patch 11/32 could be a problem because removing the irqsave would allow PDMA transfers to be interrupted. Aside from the timing issues, I agree with your analysis above regarding m68k.
You mentioned you need realtime so you want an interrupt to be able to preempt another one.
That's not what I said. But for the sake of discussion, yes, I do know people who run Linux on ARM hardware (if Android vendor kernels can be called "Linux") and who would benefit from realtime support on those devices.
Realtime is definitely a genuine requirement on ARM Linux.
I once talked/worked with some guys who were using ARM for realtime systems. The feasible approaches include the following (a small illustration of approach 3 follows below):
1. Dual OS (RTOS + Linux), e.g. QNX+Linux, XENOMAI+Linux, L4+Linux
2. preempt-rt, which is continuously maintained: https://lore.kernel.org/lkml/20210218201041.65fknr7bdplwqbez@linutronix.de/
3. bootargs isolcpus= to isolate a CPU for a specific realtime task or interrupt: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_for_r...
4. ARM FIQ, which has a separate fiq API; an example in fsl sound: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/soun...
5. Let one core be invisible to Linux, running a non-OS system or an RTOS on that core
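As a small, purely hypothetical illustration of approach 3 (the IRQ number and CPU choice are made up):

    # kernel command line: keep CPU 3 out of general scheduling and out of
    # the default IRQ affinity mask
    isolcpus=3 irqaffinity=0-2

    # then pin the latency-critical IRQ (say IRQ 45) to CPU 3 at runtime
    echo 8 > /proc/irq/45/smp_affinity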
Honestly, I've never seen anyone who depends on irq priority to support realtime in ARM Linux, though ARM's RTOSes use it quite commonly.
Now you said you want an interrupt not to be preempted as it will make a timing issue.
mac_esp deliberately constrains segment sizes so that it can harmlessly disable interrupts for the duration of the transfer.
Maybe the irqsave in this driver is over-cautious. Who knows? The PDMA timing problem relates to SCSI bus signalling and the tolerance of real-world SCSI devices to same. The other problem is that the PDMA logic circuit is undocumented hardware. So there may be further timing requirements lurking there. Therefore, patch 11/32 is too risky.
If this PDMA transfer will have some problem when it is preempted, I believe we need some enhanced ways to handle this, otherwise, once we enable preempt_rt or threaded_irq, it will get the timing issue. so here it needs a clear comment and IRQF_NO_THREAD if this is the case.
People who require fast response times cannot expect random drivers or platforms to meet such requirements. I fear you may be asking too much from Mac Quadra machines.
Once preempt_rt is enabled, those who want a fast irq environment need the IRQF_NO_THREAD flag, or need to give the irq thread a higher SCHED_FIFO/SCHED_RR priority.
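For example, with a threaded handler one could do something like this (the IRQ number, thread name, priority value and <pid> placeholder are made up):

    # find the irq thread's pid, then raise it to SCHED_FIFO priority 80
    ps -eo pid,comm | grep 'irq/45-example'
    chrt -f -p 80 <pid>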
With regard to other architectures and platforms, in specific cases, e.g. where there's never more than one IRQ involved, then I could agree that your assumptions probably hold and an irqsave would be probably redundant.
When you find a redundant irqsave, to actually patch it would bring a risk of regression with little or no reward. It's not my place to veto this entire patch series on that basis but IMO this kind of churn is misguided.
Nope.
I would say the real misguidance is that the code adds one lock while it doesn't need the lock. Easily we can add redundant locks or exaggerate the coverage range of locks, but the smarter way is that people add locks only when they really need the lock by considering concurrency and realtime performance.
You appear to be debating a strawman. No-one is advocating excessive locking in new code.
I actually meant that most irqsave calls in hardirq handlers were added carelessly. When an irq handler and threads can access the same data, people add irqsave in the threads; that is perfectly fine, as it blocks the irq. But people were also likely to put an irqsave in the irq handler without any thought.
We do have some drivers that do this with a clear intention, like your sonic_interrupt(), but I bet most were done aimlessly.
Anyway, this debate is long enough; let's move on to more important things. I appreciate that you shared a lot of knowledge of m68k.
Thanks Barry
On Wed, 24 Feb 2021, Song Bao Hua (Barry Song) wrote:
Realtime requirement is definitely a true requirement on ARM Linux.
I once talked/worked with some guys who were using ARM for realtime system. The feasible approaches include:
- Dual OS(RTOS + Linux): e.g. QNX+Linux XENOMAI+Linux L4+Linux
- preempt-rt
Which is continuously maintained, e.g.: https://lore.kernel.org/lkml/20210218201041.65fknr7bdplwqbez@linutronix.de/
- bootargs isolcpus= to isolate a cpu for a specific realtime task or interrupt: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_for_r...
- ARM FIQ, which has a separate fiq API; an example in fsl sound: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/soun...
- Let one core be invisible to Linux, running a non-OS system or RTOS on that core
Regarding Linux systems, it appears that approach 3 could theoretically achieve minimal interrupt latency for a given device without requiring any interrupt nesting. But the price is one CPU core which is not going to work on a uniprocessor system.
Honestly, I've never seen anyone who depends on irq priority to support realtime in ARM Linux though ARM's RTOS-es use it quite commonly.
Perhaps you don't work with uniprocessor ARM Linux systems?
Once preempt_rt is enabled, those who want a fast irq environment need a no_thread flag, or need to set its irq thread to higher sched_fifo/rr priority.
Thanks for the tip.
[...]
Anyway, the debate is long enough, let's move to some more important things. I appreciate that you shared a lot of knowledge of m68k.
No problem.
Thanks Barry
Hi Finn, Thanks for reviewing the patch set.
On 2021/2/8 15:57, Finn Thain wrote:
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI drivers. There are no function changes, but may speed up if interrupt happen too often.
This change doesn't necessarily work on platforms that support nested interrupts.
Linux doesn't support nested interrupts anymore after the following patch, so please don't worry about this.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
Were you able to measure any benefit from this change on some other platform?
It's hard to measure the benefit of this change. Hmm, you could take this patch set as a cleanup. Thanks.
Please see also, https://lore.kernel.org/linux-scsi/89c5cb05cb844939ae684db0077f675f@h3c.com/
On Tue, 9 Feb 2021, tanxiaofei wrote:
Hi Finn, Thanks for reviewing the patch set.
On 2021/2/8 15:57, Finn Thain wrote:
On Sun, 7 Feb 2021, Xiaofei Tan wrote:
Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI drivers. There are no function changes, but may speed up if interrupt happen too often.
This change doesn't necessarily work on platforms that support nested interrupts.
Linux doesn't support nested interrupts anymore after the following patch, so please don't worry this.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?i...
Clearly that patch did not disable interrupts. It removed a statement that enabled them.
Were you able to measure any benefit from this change on some other platform?
It's hard to measure the benefit of this change.
It's hard to see any benefit. But it's easy to see risk, when there's no indication that you've confirmed that the affected drivers do not rely on the irq lock, nor tested them for regressions, nor checked whether the affected platforms meet your assumptions.
Hmm, you could take this patch set as cleanup. thanks.
A "cleanup" does not change program behaviour. Can you demonstrate that program behaviour is unchanged?
Please see also, https://lore.kernel.org/linux-scsi/89c5cb05cb844939ae684db0077f675f@h3c.com/
Hi Xiaofei,
On Sun, Feb 7, 2021 at 12:46 PM Xiaofei Tan tanxiaofei@huawei.com wrote:
Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI drivers. There are no function changes, but may speed up if interrupt happen too often.
I'll bite: how much does this speed up interrupt processing? What's the typical cost of saving/disabling, and restoring interrupt state? Is removing this cost worth the risk of introducing subtle regressions on platforms you cannot test yourself?
BTW, how many of these legacy SCSI controllers do you have access to?
Thanks for your answers!
Gr{oetje,eeting}s,
Geert
Hi Geert,
On 2021/2/24 17:41, Geert Uytterhoeven wrote:
Hi Xiaofei,
On Sun, Feb 7, 2021 at 12:46 PM Xiaofei Tan tanxiaofei@huawei.com wrote:
Replace spin_lock_irqsave with spin_lock in hard IRQ of SCSI drivers. There are no function changes, but may speed up if interrupt happen too often.
I'll bite: how much does this speed up interrupt processing? What's the typical cost of saving/disabling, and restoring interrupt state?
It could only take a few CPU cycles, so there is little benefit in terms of speeding up interrupt processing. You could take these as cleanups.
Is removing this cost worth the risk of introducing subtle regressions on platforms you cannot test yourself?
Currently, I have only found that the m68k platform supports high-priority interrupts preempting low-priority ones. No other platform has such behaviour. Therefore, these changes do not affect non-m68k platforms.
For the m68k platform, no one has reported such an interrupt preemption case in these SCSI drivers.
BTW, how many of these legacy SCSI controllers do you have access to?
Actually, none.
Thanks for your answers!
Gr{oetje,eeting}s,
Geert