This patch set adds support for the XFS atomic writes feature.
Alan Adamson (1):
  nvme: Atomic write support

Darrick J. Wong (3):
  fs: xfs: Introduce FORCEALIGN inode flag
  fs: xfs: Enable file data forcealign feature
  fs: xfs: Make file data allocations observe the 'forcealign' flag

John Garry (14):
  block: Pass blk_queue_get_max_sectors() a request pointer
  block: Add core atomic write support
  block: Add fops atomic write support
  fs: xfs: Do not free EOF blocks for forcealign
  fs: iomap: Sub-extent zeroing
  fs: xfs: iomap: Sub-extent zeroing
  fs: Add FS_XFLAG_ATOMICWRITES flag
  fs: iomap: Atomic write support
  fs: xfs: Support FS_XFLAG_ATOMICWRITES for forcealign
  fs: xfs: Support atomic write for statx
  fs: xfs: Validate atomic writes
  fs: xfs: Support setting FMODE_CAN_ATOMIC_WRITE
  xfs: Update xfs_is_falloc_aligned() mask for forcealign
  xfs: Only free full extents for forcealign

Long Li (4):
  xfs: support atomic write ioctl
  xfs: atomic write file dio convert to mark IOCB_ATOMIC
  xfs: fix set xflags fail when inode has extent hit
  xfs: make bunmapi observe forcealigin

Prasad Singamsetty (3):
  fs: Initial atomic write support
  fs: Add initial atomic write support info to statx
  block: Add atomic write support for statx

Zhang Yi (4):
  math64: add rem_u64() to just return the remainder
  iomap: pass blocksize to iomap_truncate_page()
  xfs: refactor the truncating order
  xfs: correct the truncate blocksize of realtime inode
 Documentation/ABI/testing/sysfs-block |  52 +++++++++
 block/blk-core.c                      |  23 +++-
 block/blk-merge.c                     |  95 +++++++++++++++-
 block/blk-settings.c                  |  94 +++++++++++++++
 block/blk-sysfs.c                     |  33 ++++++
 drivers/block/rnbd/rnbd-srv-dev.h     |   3 +-
 drivers/nvme/host/core.c              |  52 +++++++++
 fs/aio.c                              |   8 +-
 fs/block_dev.c                        |  62 +++++++++-
 fs/iomap/buffered-io.c                |   8 +-
 fs/iomap/direct-io.c                  |  33 ++++--
 fs/read_write.c                       |   2 +-
 fs/stat.c                             |  46 ++++++++
 fs/xfs/libxfs/xfs_bmap.c              |  22 +++-
 fs/xfs/libxfs/xfs_format.h            |  15 ++-
 fs/xfs/libxfs/xfs_inode_buf.c         |  40 +++++++
 fs/xfs/libxfs/xfs_inode_buf.h         |   3 +
 fs/xfs/libxfs/xfs_sb.c                |   4 +
 fs/xfs/xfs_bmap_util.c                |  10 +-
 fs/xfs/xfs_file.c                     |  41 ++++++-
 fs/xfs/xfs_inode.c                    |  42 +++++++
 fs/xfs/xfs_inode.h                    |  12 ++
 fs/xfs/xfs_ioctl.c                    |  96 +++++++++++++++-
 fs/xfs/xfs_iomap.c                    |  19 +++-
 fs/xfs/xfs_iops.c                     | 158 ++++++++++++++++----------
 fs/xfs/xfs_mount.h                    |   5 +
 fs/xfs/xfs_super.c                    |   9 ++
 include/linux/blk_types.h             |   7 ++
 include/linux/blkdev.h                |  82 ++++++++++++-
 include/linux/fs.h                    |  37 +++++-
 include/linux/iomap.h                 |   5 +-
 include/linux/math64.h                |  24 ++++
 include/linux/stat.h                  |   3 +
 include/uapi/linux/fs.h               |   9 +-
 include/uapi/linux/stat.h             |  10 +-
 io_uring/io_uring.c                   |   2 +-
 36 files changed, 1064 insertions(+), 102 deletions(-)
Feedback: The patch(es) you sent to the kernel@openeuler.org mailing list have been converted to a pull request successfully! Pull request link: https://gitee.com/openeuler/kernel/pulls/8993 Mailing list address: https://mailweb.openeuler.org/hyperkitty/list/kernel@openeuler.org/message/H...
From: John Garry <john.g.garry@oracle.com>

maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3
CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Currently blk_queue_get_max_sectors() is passed an enum req_op. In the future the value returned from blk_queue_get_max_sectors() may depend on certain request flags, so pass a request pointer instead.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Signed-off-by: Long Li <leo.lilong@huawei.com>
---
 block/blk-core.c                  |  2 +-
 drivers/block/rnbd/rnbd-srv-dev.h |  3 +--
 include/linux/blkdev.h            | 10 ++++++----
 3 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index f91f8e8be482..7950063c0345 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1155,7 +1155,7 @@ EXPORT_SYMBOL(submit_bio);
 static blk_status_t blk_cloned_rq_check_limits(struct request_queue *q,
 				      struct request *rq)
 {
-	unsigned int max_sectors = blk_queue_get_max_sectors(q, req_op(rq));
+	unsigned int max_sectors = blk_queue_get_max_sectors(rq);
 
 	if (blk_rq_sectors(rq) > max_sectors) {
 		/*
diff --git a/drivers/block/rnbd/rnbd-srv-dev.h b/drivers/block/rnbd/rnbd-srv-dev.h
index 0eb23850afb9..e41ce52ab8f1 100644
--- a/drivers/block/rnbd/rnbd-srv-dev.h
+++ b/drivers/block/rnbd/rnbd-srv-dev.h
@@ -66,8 +66,7 @@ static inline int rnbd_dev_get_max_discard_sects(const struct rnbd_dev *dev)
 	if (!blk_queue_discard(bdev_get_queue(dev->bdev)))
 		return 0;
 
-	return blk_queue_get_max_sectors(bdev_get_queue(dev->bdev),
-					 REQ_OP_DISCARD);
+	return bdev_get_queue(dev->bdev)->limits.max_discard_sectors;
 }
 
 static inline int rnbd_dev_get_discard_granularity(const struct rnbd_dev *dev)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 50b4fd0a0687..2d6747877f7c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1082,9 +1082,11 @@ static inline struct bio_vec req_bvec(struct request *rq)
 	return mp_bvec_iter_bvec(rq->bio->bi_io_vec, rq->bio->bi_iter);
 }
 
-static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
-						     int op)
+static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
 {
+	struct request_queue *q = rq->q;
+	int op = req_op(rq);
+
 	if (unlikely(op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE))
 		return min(q->limits.max_discard_sectors,
 			   UINT_MAX >> SECTOR_SHIFT);
@@ -1132,10 +1134,10 @@ static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
 	if (!q->limits.chunk_sectors ||
 	    req_op(rq) == REQ_OP_DISCARD ||
 	    req_op(rq) == REQ_OP_SECURE_ERASE)
-		return blk_queue_get_max_sectors(q, req_op(rq));
+		return blk_queue_get_max_sectors(rq);
 
 	return min(blk_max_size_offset(q, offset, 0),
-			blk_queue_get_max_sectors(q, req_op(rq)));
+			blk_queue_get_max_sectors(rq));
 }
 
 static inline unsigned int blk_rq_count_bios(struct request *rq)
From: Prasad Singamsetty <prasad.singamsetty@oracle.com>

maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3
CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-4-john.g.garry@oracle.com...
--------------------------------
An atomic write is a write issued with torn-write protection, meaning that in the event of a power failure or any other hardware failure, all or none of the data from the write will be stored, but never a mix of old and new data.
Userspace may add flag RWF_ATOMIC to pwritev2() to indicate that the write is to be issued with torn-write prevention, according to special alignment and length rules.
For any syscall interface utilizing struct iocb, add IOCB_ATOMIC to the iocb->ki_flags field to indicate the same.
A call to statx will give the relevant atomic write info for a file:
- atomic_write_unit_min
- atomic_write_unit_max
- atomic_write_segments_max
Both min and max values must be a power-of-2.
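As a rough illustration, userspace could query these limits as in the sketch below. This is illustrative only: it assumes headers carrying this series' statx additions, and the fallback defines mirror the values added later in this series.

/*
 * Sketch: query a file's atomic write limits with statx(2).
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

#ifndef STATX_WRITE_ATOMIC
#define STATX_WRITE_ATOMIC 0x00008000U		/* fallback, from this series */
#endif
#ifndef STATX_ATTR_WRITE_ATOMIC
#define STATX_ATTR_WRITE_ATOMIC 0x00400000	/* fallback, from this series */
#endif

int main(int argc, char **argv)
{
	struct statx stx;

	if (argc < 2 || statx(AT_FDCWD, argv[1], 0,
			      STATX_BASIC_STATS | STATX_WRITE_ATOMIC, &stx))
		return 1;

	if (stx.stx_attributes & STATX_ATTR_WRITE_ATOMIC)
		printf("unit_min=%u unit_max=%u segments_max=%u\n",
		       stx.stx_atomic_write_unit_min,
		       stx.stx_atomic_write_unit_max,
		       stx.stx_atomic_write_segments_max);
	return 0;
}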
Applications can avail of the atomic write feature by ensuring that the total length of a write is a power-of-2 in size and also lies between atomic_write_unit_min and atomic_write_unit_max, inclusive. Applications must ensure that the write is at a naturally-aligned offset in the file with respect to the total write length. The value in atomic_write_segments_max indicates the upper limit for IOV_ITER iovcnt.
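A compliant write might then look like the following hypothetical sketch, assuming RWF_ATOMIC as defined later in this patch and that 16 KiB lies within the advertised unit_min/unit_max:

/*
 * Sketch: issue a 16 KiB atomic write with pwritev2(). Illustrative
 * code only; the RWF_ATOMIC fallback define mirrors this patch.
 */
#define _GNU_SOURCE
#include <stdlib.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef RWF_ATOMIC
#define RWF_ATOMIC 0x00000040	/* fallback, from this series */
#endif

int atomic_write_16k(int fd)
{
	size_t len = 16 * 1024;	/* power-of-2 total write length */
	off_t pos = 32 * 1024;	/* naturally aligned wrt the length */
	struct iovec iov;
	void *buf;
	int ret;

	if (posix_memalign(&buf, 4096, len))
		return -1;
	iov.iov_base = buf;
	iov.iov_len = len;

	/* iovcnt == 1, matching the initial atomic_write_segments_max */
	ret = pwritev2(fd, &iov, 1, pos, RWF_ATOMIC) < 0 ? -1 : 0;
	free(buf);
	return ret;
}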
Add file mode flag FMODE_CAN_ATOMIC_WRITE, so files which do not have the flag set will have RWF_ATOMIC rejected and not just ignored.
Add a type argument to kiocb_set_rw_flags() to allow reads which have RWF_ATOMIC set to be rejected.
Helper function generic_atomic_write_valid() can be used by FSes to verify compliant writes. There we check that the iov_iter type is a ubuf, which implies iovcnt == 1 for pwritev2(); this is an initial restriction matching atomic_write_segments_max. Initially the only user will be the bdev file operations write handler. We will rely on the block layer BIO submission path to ensure write sizes are compliant for the bdev, so we do not need to check atomic write sizes here yet.
Signed-off-by: Prasad Singamsetty <prasad.singamsetty@oracle.com>
jpg: merge into single patch and much rewrite
Signed-off-by: John Garry <john.g.garry@oracle.com>
Signed-off-by: Long Li <leo.lilong@huawei.com>
---
 fs/aio.c                |  8 ++++----
 fs/read_write.c         |  2 +-
 include/linux/fs.h      | 33 ++++++++++++++++++++++++++++++++-
 include/uapi/linux/fs.h |  5 ++++-
 io_uring/io_uring.c     |  2 +-
 5 files changed, 42 insertions(+), 8 deletions(-)
diff --git a/fs/aio.c b/fs/aio.c index 00641a1ad0b3..0c5b6eece498 100644 --- a/fs/aio.c +++ b/fs/aio.c @@ -1458,7 +1458,7 @@ static void aio_complete_rw(struct kiocb *kiocb, long res, long res2) iocb_put(iocb); }
-static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb) +static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb, int rw_type) { int ret;
@@ -1485,7 +1485,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb) } else req->ki_ioprio = get_current_ioprio();
- ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags); + ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags, rw_type); if (unlikely(ret)) return ret;
@@ -1537,7 +1537,7 @@ static int aio_read(struct kiocb *req, const struct iocb *iocb, struct file *file; int ret;
- ret = aio_prep_rw(req, iocb); + ret = aio_prep_rw(req, iocb, READ); if (ret) return ret; file = req->ki_filp; @@ -1565,7 +1565,7 @@ static int aio_write(struct kiocb *req, const struct iocb *iocb, struct file *file; int ret;
- ret = aio_prep_rw(req, iocb); + ret = aio_prep_rw(req, iocb, WRITE); if (ret) return ret; file = req->ki_filp; diff --git a/fs/read_write.c b/fs/read_write.c index 371a5a76f30e..da03b3e65cf3 100644 --- a/fs/read_write.c +++ b/fs/read_write.c @@ -726,7 +726,7 @@ static ssize_t do_iter_readv_writev(struct file *filp, struct iov_iter *iter, ssize_t ret;
init_sync_kiocb(&kiocb, filp); - ret = kiocb_set_rw_flags(&kiocb, flags); + ret = kiocb_set_rw_flags(&kiocb, flags, type); if (ret) return ret; kiocb.ki_pos = (ppos ? *ppos : 0); diff --git a/include/linux/fs.h b/include/linux/fs.h index 382a0d4dd3dd..5e37ba855083 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -41,6 +41,7 @@ #include <linux/build_bug.h> #include <linux/stddef.h> #include <linux/mount.h> +#include <linux/uio.h> #include <linux/tracepoint-defs.h>
#include <asm/byteorder.h> @@ -184,6 +185,9 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset, /* File supports async buffered reads */ #define FMODE_BUF_RASYNC ((__force fmode_t)0x40000000)
+/* File supports atomic writes */ +#define FMODE_CAN_ATOMIC_WRITE ((__force fmode_t)0x80000000) + /* File mode control flag, expect random access pattern */ #define FMODE_CTL_RANDOM ((__force fmode_t)0x1000)
@@ -320,6 +324,7 @@ enum rw_hint { #define IOCB_SYNC (__force int) RWF_SYNC #define IOCB_NOWAIT (__force int) RWF_NOWAIT #define IOCB_APPEND (__force int) RWF_APPEND +#define IOCB_ATOMIC (__force int) RWF_ATOMIC
/* non-RWF related bits - start at 16 */ #define IOCB_EVENTFD (1 << 16) @@ -3406,7 +3411,8 @@ static inline int iocb_flags(struct file *file) return res; }
-static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags) +static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags, + int rw_type) { int kiocb_flags = 0;
@@ -3423,6 +3429,12 @@ static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags) return -EOPNOTSUPP; kiocb_flags |= IOCB_NOIO; } + if (flags & RWF_ATOMIC) { + if (rw_type != WRITE) + return -EOPNOTSUPP; + if (!(ki->ki_filp->f_mode & FMODE_CAN_ATOMIC_WRITE)) + return -EOPNOTSUPP; + } kiocb_flags |= (__force int) (flags & RWF_SUPPORTED); if (flags & RWF_SYNC) kiocb_flags |= IOCB_DSYNC; @@ -3665,4 +3677,23 @@ static inline void fs_file_read_do_trace(struct kiocb *iocb) if (tracepoint_enabled(fs_file_read)) fs_file_read_update_args_by_trace(iocb); } + +static inline +bool generic_atomic_write_valid(loff_t pos, struct iov_iter *iter, + unsigned int unit_min, unsigned int unit_max) +{ + size_t len = iov_iter_count(iter); + + if (len < unit_min || len > unit_max) + return false; + + if (!is_power_of_2(len)) + return false; + + if (!IS_ALIGNED(pos, len)) + return false; + + return true; +} + #endif /* _LINUX_FS_H */ diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index f44eb0a04afd..78f4091ed188 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -300,8 +300,11 @@ typedef int __bitwise __kernel_rwf_t; /* per-IO O_APPEND */ #define RWF_APPEND ((__force __kernel_rwf_t)0x00000010)
+/* Atomic Write */ +#define RWF_ATOMIC ((__force __kernel_rwf_t)0x00000040) + /* mask of flags supported by the kernel */ #define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\ - RWF_APPEND) + RWF_APPEND | RWF_ATOMIC)
#endif /* _UAPI_LINUX_FS_H */ diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index 65cf70874fb3..c284e9865826 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -2956,7 +2956,7 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe, kiocb->ki_pos = READ_ONCE(sqe->off); kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp)); kiocb->ki_flags = iocb_flags(kiocb->ki_filp); - ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags)); + ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags), rw); if (unlikely(ret)) return ret;
From: Prasad Singamsetty <prasad.singamsetty@oracle.com>

maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3
CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Extend the statx system call to return additional info on atomic write support for a file.
Helper function generic_fill_statx_atomic_writes() can be used by FSes to fill in the relevant statx fields. For now, atomic_write_segments_max will always be 1; otherwise some rules would need to be imposed on iovec length and alignment, which we do not want yet.
Signed-off-by: Prasad Singamsetty <prasad.singamsetty@oracle.com>
jpg: relocate bdev support to another patch
Signed-off-by: John Garry <john.g.garry@oracle.com>
Signed-off-by: Long Li <leo.lilong@huawei.com>
---
 fs/stat.c                 | 34 ++++++++++++++++++++++++++++++++++
 include/linux/fs.h        |  4 ++++
 include/linux/stat.h      |  3 +++
 include/uapi/linux/stat.h | 10 +++++++++-
 4 files changed, 50 insertions(+), 1 deletion(-)
diff --git a/fs/stat.c b/fs/stat.c index a8bf565886a8..f37e801fee98 100644 --- a/fs/stat.c +++ b/fs/stat.c @@ -51,6 +51,37 @@ void generic_fillattr(struct inode *inode, struct kstat *stat) } EXPORT_SYMBOL(generic_fillattr);
+/** + * generic_fill_statx_atomic_writes - Fill in atomic writes statx attributes + * @stat: Where to fill in the attribute flags + * @unit_min: Minimum supported atomic write length in bytes + * @unit_max: Maximum supported atomic write length in bytes + * + * Fill in the STATX{_ATTR}_WRITE_ATOMIC flags in the kstat structure from + * atomic write unit_min and unit_max values. + */ +void generic_fill_statx_atomic_writes(struct kstat *stat, + unsigned int unit_min, + unsigned int unit_max) +{ + /* Confirm that the request type is known */ + stat->result_mask |= STATX_WRITE_ATOMIC; + + /* Confirm that the file attribute type is known */ + stat->attributes_mask |= STATX_ATTR_WRITE_ATOMIC; + + if (unit_min) { + stat->atomic_write_unit_min = unit_min; + stat->atomic_write_unit_max = unit_max; + /* Initially only allow 1x segment */ + stat->atomic_write_segments_max = 1; + + /* Confirm atomic writes are actually supported */ + stat->attributes |= STATX_ATTR_WRITE_ATOMIC; + } +} +EXPORT_SYMBOL_GPL(generic_fill_statx_atomic_writes); + /** * vfs_getattr_nosec - getattr without security checks * @path: file to get attributes from @@ -569,6 +600,9 @@ cp_statx(const struct kstat *stat, struct statx __user *buffer) tmp.stx_dev_major = MAJOR(stat->dev); tmp.stx_dev_minor = MINOR(stat->dev); tmp.stx_mnt_id = stat->mnt_id; + tmp.stx_atomic_write_unit_min = stat->atomic_write_unit_min; + tmp.stx_atomic_write_unit_max = stat->atomic_write_unit_max; + tmp.stx_atomic_write_segments_max = stat->atomic_write_segments_max;
return copy_to_user(buffer, &tmp, sizeof(tmp)) ? -EFAULT : 0; } diff --git a/include/linux/fs.h b/include/linux/fs.h index 5e37ba855083..fac4c49f9365 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -3251,6 +3251,10 @@ extern int page_symlink(struct inode *inode, const char *symname, int len); extern const struct inode_operations page_symlink_inode_operations; extern void kfree_link(void *); extern void generic_fillattr(struct inode *, struct kstat *); +void generic_fill_statx_atomic_writes(struct kstat *stat, + unsigned int unit_min, + unsigned int unit_max); + extern int vfs_getattr_nosec(const struct path *, struct kstat *, u32, unsigned int); extern int vfs_getattr(const struct path *, struct kstat *, u32, unsigned int); void __inode_add_bytes(struct inode *inode, loff_t bytes); diff --git a/include/linux/stat.h b/include/linux/stat.h index fff27e603814..d266d2be6606 100644 --- a/include/linux/stat.h +++ b/include/linux/stat.h @@ -46,6 +46,9 @@ struct kstat { struct timespec64 btime; /* File creation time */ u64 blocks; u64 mnt_id; + u32 atomic_write_unit_min; + u32 atomic_write_unit_max; + u32 atomic_write_segments_max; };
#endif diff --git a/include/uapi/linux/stat.h b/include/uapi/linux/stat.h index 1500a0f58041..5ecb22f064ba 100644 --- a/include/uapi/linux/stat.h +++ b/include/uapi/linux/stat.h @@ -126,7 +126,12 @@ struct statx { __u64 stx_mnt_id; __u64 __spare2; /* 0xa0 */ - __u64 __spare3[12]; /* Spare space for future expansion */ + __u32 stx_atomic_write_unit_min; /* Min atomic write unit in bytes */ + __u32 stx_atomic_write_unit_max; /* Max atomic write unit in bytes */ + __u32 stx_atomic_write_segments_max; /* Max atomic write segment count */ + __u32 __spare3[1]; + /* 0xb0 */ + __u64 __spare4[10]; /* Spare space for future expansion */ /* 0x100 */ };
@@ -152,6 +157,8 @@ struct statx { #define STATX_BASIC_STATS 0x000007ffU /* The stuff in the normal stat struct */ #define STATX_BTIME 0x00000800U /* Want/got stx_btime */ #define STATX_MNT_ID 0x00001000U /* Got stx_mnt_id */ +#define STATX_WRITE_ATOMIC 0x00008000U /* Want/got atomic_write_* fields */ +
#define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */
@@ -187,6 +194,7 @@ struct statx { #define STATX_ATTR_MOUNT_ROOT 0x00002000 /* Root of a mount */ #define STATX_ATTR_VERITY 0x00100000 /* [I] Verity protected file */ #define STATX_ATTR_DAX 0x00200000 /* File is currently in DAX state */ +#define STATX_ATTR_WRITE_ATOMIC 0x00400000 /* File supports atomic write operations */
#endif /* _UAPI_LINUX_STAT_H */
From: John Garry <john.g.garry@oracle.com>

maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3
CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Add atomic write support, as follows:
- add helper functions to get request_queue atomic write limits
- report request_queue atomic write support limits to sysfs and update Doc
- support to safely merge atomic writes
- deal with splitting atomic writes
- misc helper functions
- add a per-request atomic write flag
New request_queue limits are added, as follows:
- atomic_write_hw_max is set by the block driver and is the maximum length
  of an atomic write which the device may support. It is not necessarily a
  power-of-2.
- atomic_write_max_sectors is derived from atomic_write_hw_max and
  max_hw_sectors. It is always a power-of-2. Atomic writes may be merged,
  and atomic_write_max_sectors would be the limit on a merged atomic write
  request size. This value is not capped at max_sectors, as the value in
  max_sectors can be controlled from userspace, and it would only cause
  trouble if userspace could limit atomic_write_unit_max_bytes and the
  other atomic write limits.
- atomic_write_hw_unit_{min,max} are set by the block driver and are the
  min/max length of an atomic write unit which the device may support.
  They both must be a power-of-2. Typically atomic_write_hw_unit_max will
  hold the same value as atomic_write_hw_max.
- atomic_write_unit_{min,max} are derived from
  atomic_write_hw_unit_{min,max}, max_hw_sectors, and block core limits.
  Both min and max values must be a power-of-2.
- atomic_write_hw_boundary is set by the block driver. If non-zero, it
  indicates an LBA space boundary which an atomic write must not straddle
  if it is to be executed atomically by the disk (see the sketch below).
  The value must be a power-of-2. Note that it would be acceptable to
  enforce a rule that atomic_write_hw_boundary_sectors is a multiple of
  atomic_write_hw_unit_max, but the resultant code would be more
  complicated.
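To make the boundary rule concrete, here is the standalone sketch referenced above (illustrative C, not kernel code; the boundary is assumed to be a power-of-2). With a 64 KiB boundary, a write covering bytes [60 KiB, 68 KiB) straddles it, while one covering [64 KiB, 72 KiB) does not:

#include <stdbool.h>
#include <stdint.h>

/*
 * Mirrors the mask check used by rq_straddles_atomic_write_boundary()
 * below: a write straddles the boundary iff its first and last bytes
 * fall in different boundary-sized windows.
 */
static bool straddles_boundary(uint64_t start, uint64_t len, uint64_t boundary)
{
	uint64_t mask = ~(boundary - 1);

	return (start & mask) != ((start + len - 1) & mask);
}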
All atomic write limits are set to 0 by default, indicating no atomic write support. Even though it is assumed by Linux that a logical block can always be atomically written, we ignore this, as it is not of particular interest. Stacked devices are simply not supported for now either.
An atomic write must always be submitted to the block driver as part of a single request. As such, only a single BIO must be submitted to the block layer for an atomic write. When a single atomic write BIO is submitted, it cannot be split. As such, atomic_write_unit_{min,max}_bytes are limited by the maximum guaranteed BIO size which will not be required to be split. This max size is calculated from the request_queue max segments and the number of bvecs a BIO can fit, BIO_MAX_PAGES (BIO_MAX_VECS in later kernels). Currently we rely on userspace issuing a write with iovcnt=1 for pwritev2() - as such, we can rely on each segment containing PAGE_SIZE of data, apart from the first and last segments, which can each hold a logical block size of data. The first and last segments will be LBS length/aligned as we rely on direct IO alignment rules also.
New sysfs files are added to report the following atomic write limits:
- atomic_write_unit_max_bytes - same as atomic_write_unit_max_sectors in bytes
- atomic_write_unit_min_bytes - same as atomic_write_unit_min_sectors in bytes
- atomic_write_boundary_bytes - same as atomic_write_hw_boundary_sectors in bytes
- atomic_write_max_bytes - same as atomic_write_max_sectors in bytes
Atomic writes may only be merged with other atomic writes and only under the following conditions:
- the total resultant request length <= atomic_write_max_bytes
- the merged write does not straddle a boundary
For example, two contiguous 16 KiB atomic writes may be merged into one 32 KiB request only if atomic_write_max_bytes is at least 32 KiB and the combined range crosses no boundary.
Helper function bdev_can_atomic_write() is added to indicate whether atomic writes may be issued to a bdev. If a bdev is a partition, the partition start must be aligned with both atomic_write_unit_min_sectors and atomic_write_hw_boundary_sectors.
FSes will rely on the block layer to validate that a submitted atomic write BIO is of a valid size, so add blk_validate_atomic_write_op_size() for this purpose. Userspace expects an atomic write of invalid size to be rejected with -EINVAL, so add BLK_STS_INVAL for this. Also use BLK_STS_INVAL for when a BIO needs to be split, as this should mean an invalidly sized BIO.
Flag REQ_ATOMIC is used for indicating an atomic write.
Co-developed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Signed-off-by: Long Li <leo.lilong@huawei.com>
---
 Documentation/ABI/testing/sysfs-block | 52 +++++++++++++++
 block/blk-core.c                      | 21 ++++++
 block/blk-merge.c                     | 95 ++++++++++++++++++++++++++-
 block/blk-settings.c                  | 94 ++++++++++++++++++++++++++
 block/blk-sysfs.c                     | 33 ++++++++++
 include/linux/blk_types.h             |  7 ++
 include/linux/blkdev.h                | 65 ++++++++++++++++++
 7 files changed, 366 insertions(+), 1 deletion(-)
diff --git a/Documentation/ABI/testing/sysfs-block b/Documentation/ABI/testing/sysfs-block index e34cdeeeb9d4..6cee984819b3 100644 --- a/Documentation/ABI/testing/sysfs-block +++ b/Documentation/ABI/testing/sysfs-block @@ -97,6 +97,58 @@ Description: indicates how many bytes the beginning of the device is offset from the disk's natural alignment.
+What: /sys/block/<disk>/atomic_write_max_bytes +Date: February 2024 +Contact: Himanshu Madhani <himanshu.madhani@oracle.com> +Description: + [RO] This parameter specifies the maximum atomic write + size reported by the device. This parameter is relevant + for merging of writes, where a merged atomic write + operation must not exceed this number of bytes. + This parameter may be greater than the value in + atomic_write_unit_max_bytes as + atomic_write_unit_max_bytes will be rounded down to a + power-of-two and atomic_write_unit_max_bytes may also be + limited by some other queue limits, such as max_segments. + This parameter - along with atomic_write_unit_min_bytes + and atomic_write_unit_max_bytes - will not be larger than + max_hw_sectors_kb, but may be larger than max_sectors_kb. + + +What: /sys/block/<disk>/atomic_write_unit_min_bytes +Date: February 2024 +Contact: Himanshu Madhani <himanshu.madhani@oracle.com> +Description: + [RO] This parameter specifies the smallest block which can + be written atomically with an atomic write operation. All + atomic write operations must begin at an + atomic_write_unit_min boundary and must be multiples of + atomic_write_unit_min. This value must be a power-of-two. + + +What: /sys/block/<disk>/atomic_write_unit_max_bytes +Date: February 2024 +Contact: Himanshu Madhani <himanshu.madhani@oracle.com> +Description: + [RO] This parameter defines the largest block which can be + written atomically with an atomic write operation. This + value must be a multiple of atomic_write_unit_min and must + be a power-of-two. This value will not be larger than + atomic_write_max_bytes. + + +What: /sys/block/<disk>/atomic_write_boundary_bytes +Date: February 2024 +Contact: Himanshu Madhani <himanshu.madhani@oracle.com> +Description: + [RO] A device may need to internally split I/Os which + straddle a given logical block address boundary. In that + case a single atomic write operation will be processed as + one or more sub-operations which each complete atomically. + This parameter specifies the size in bytes of the atomic + boundary if one is reported by the device. This value must + be a power-of-two. + What: /sys/block/<disk>/<partition>/alignment_offset Date: April 2009 Contact: Martin K. Petersen <martin.petersen@oracle.com> diff --git a/block/blk-core.c b/block/blk-core.c index 7950063c0345..27f356c07d26 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -205,6 +205,8 @@ static const struct { [BLK_STS_ZONE_OPEN_RESOURCE] = { -ETOOMANYREFS, "open zones exceeded" }, [BLK_STS_ZONE_ACTIVE_RESOURCE] = { -EOVERFLOW, "active zones exceeded" },
+ [BLK_STS_INVAL] = { -EINVAL, "invalid" }, + /* everything else not covered above: */ [BLK_STS_IOERR] = { -EIO, "I/O" }, }; @@ -819,6 +821,18 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q, return BLK_STS_OK; }
+static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q, + struct bio *bio) +{ + if (bio->bi_iter.bi_size > queue_atomic_write_unit_max_bytes(q)) + return BLK_STS_INVAL; + + if (bio->bi_iter.bi_size % queue_atomic_write_unit_min_bytes(q)) + return BLK_STS_INVAL; + + return BLK_STS_OK; +} + static noinline_for_stack bool submit_bio_checks(struct bio *bio) { struct request_queue *q = bio->bi_disk->queue; @@ -900,6 +914,13 @@ static noinline_for_stack bool submit_bio_checks(struct bio *bio) if (!q->limits.max_write_zeroes_sectors) goto not_supported; break; + case REQ_OP_WRITE: + if (bio->bi_opf & REQ_ATOMIC) { + status = blk_validate_atomic_write_op_size(q, bio); + if (status != BLK_STS_OK) + goto end_io; + } + break; default: break; } diff --git a/block/blk-merge.c b/block/blk-merge.c index a65d1d275040..7ca680a6c037 100644 --- a/block/blk-merge.c +++ b/block/blk-merge.c @@ -13,6 +13,46 @@ #include "blk.h" #include "blk-rq-qos.h"
+/* + * rq_straddles_atomic_write_boundary - check for boundary violation + * @rq: request to check + * @front_adjust: data size to be appended to front + * @back_adjust: data size to be appended to back + * + * Determine whether merging a request or bio into another request will result + * in a merged request which straddles an atomic write boundary. + * + * The value @front_adjust is the data which would be appended to the front of + * @rq, while the value @back_adjust is the data which would be appended to the + * back of @rq. Callers will typically only have either @front_adjust or + * @back_adjust as non-zero. + * + */ +static bool rq_straddles_atomic_write_boundary(struct request *rq, + unsigned int front_adjust, + unsigned int back_adjust) +{ + unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q); + u64 mask, start_rq_pos, end_rq_pos; + + if (!boundary) + return false; + + start_rq_pos = blk_rq_pos(rq) << SECTOR_SHIFT; + end_rq_pos = start_rq_pos + blk_rq_bytes(rq) - 1; + + start_rq_pos -= front_adjust; + end_rq_pos += back_adjust; + + mask = ~(boundary - 1); + + /* Top bits are different, so crossed a boundary */ + if ((start_rq_pos & mask) != (end_rq_pos & mask)) + return true; + + return false; +} + static inline bool bio_will_gap(struct request_queue *q, struct request *prev_rq, struct bio *prev, struct bio *next) { @@ -145,11 +185,20 @@ static inline unsigned get_max_io_size(struct request_queue *q, struct bio *bio) { unsigned sectors = blk_max_size_offset(q, bio->bi_iter.bi_sector, 0); - unsigned max_sectors = sectors; + unsigned max_sectors; unsigned pbs = queue_physical_block_size(q) >> SECTOR_SHIFT; unsigned lbs = queue_logical_block_size(q) >> SECTOR_SHIFT; unsigned start_offset = bio->bi_iter.bi_sector & (pbs - 1);
+ /* + * We ignore q->limits.max_sectors for atomic writes simply because + * it may be less than the bio size, which we cannot tolerate. + */ + if (bio->bi_opf & REQ_ATOMIC) + max_sectors = q->limits.atomic_write_max_sectors; + else + max_sectors = sectors; + max_sectors += start_offset; max_sectors &= ~(pbs - 1); if (max_sectors > start_offset) @@ -278,6 +327,11 @@ static struct bio *blk_bio_segment_split(struct request_queue *q, *segs = nsegs; return NULL; split: + if (bio->bi_opf & REQ_ATOMIC) { + bio->bi_status = BLK_STS_INVAL; + bio_endio(bio); + return ERR_PTR(-EINVAL); + } *segs = nsegs; return bio_split(bio, sectors, GFP_NOIO, bs); } @@ -594,6 +648,13 @@ int ll_back_merge_fn(struct request *req, struct bio *bio, unsigned int nr_segs) return 0; }
+ if (req->cmd_flags & REQ_ATOMIC) { + if (rq_straddles_atomic_write_boundary(req, + bio->bi_iter.bi_size, 0)) { + return 0; + } + } + return ll_new_hw_segment(req, bio, nr_segs); }
@@ -613,6 +674,13 @@ static int ll_front_merge_fn(struct request *req, struct bio *bio, return 0; }
+ if (req->cmd_flags & REQ_ATOMIC) { + if (rq_straddles_atomic_write_boundary(req, + 0, bio->bi_iter.bi_size)) { + return 0; + } + } + return ll_new_hw_segment(req, bio, nr_segs); }
@@ -649,6 +717,13 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req, blk_rq_get_max_sectors(req, blk_rq_pos(req))) return 0;
+ if (req->cmd_flags & REQ_ATOMIC) { + if (rq_straddles_atomic_write_boundary(req, + 0, blk_rq_bytes(next))) { + return 0; + } + } + total_phys_segments = req->nr_phys_segments + next->nr_phys_segments; if (total_phys_segments > blk_rq_get_max_segments(req)) return 0; @@ -721,6 +796,18 @@ static enum elv_merge blk_try_req_merge(struct request *req, return ELEVATOR_NO_MERGE; }
+static bool blk_atomic_write_mergeable_rq_bio(struct request *rq, + struct bio *bio) +{ + return (rq->cmd_flags & REQ_ATOMIC) == (bio->bi_opf & REQ_ATOMIC); +} + +static bool blk_atomic_write_mergeable_rqs(struct request *rq, + struct request *next) +{ + return (rq->cmd_flags & REQ_ATOMIC) == (next->cmd_flags & REQ_ATOMIC); +} + /* * For non-mq, this has to be called with the request spinlock acquired. * For mq with scheduling, the appropriate queue wide lock should be held. @@ -752,6 +839,9 @@ static struct request *attempt_merge(struct request_queue *q, if (req->ioprio != next->ioprio) return NULL;
+ if (!blk_atomic_write_mergeable_rqs(req, next)) + return NULL; + /* * If we are allowed to merge, then append bio list * from next to rq and release next. merge_requests_fn @@ -895,6 +985,9 @@ bool blk_rq_merge_ok(struct request *rq, struct bio *bio) if (rq->ioprio != bio_prio(bio)) return false;
+ if (blk_atomic_write_mergeable_rq_bio(rq, bio) == false) + return false; + return true; }
diff --git a/block/blk-settings.c b/block/blk-settings.c index c3aa7f8ee388..754b758946a9 100644 --- a/block/blk-settings.c +++ b/block/blk-settings.c @@ -127,6 +127,41 @@ void blk_queue_bounce_limit(struct request_queue *q, u64 max_addr) } EXPORT_SYMBOL(blk_queue_bounce_limit);
+/* + * Returns max guaranteed bytes which we can fit in a bio. + * + * We always assume that we can fit in at least PAGE_SIZE in a segment, apart + * from first and last segments. + */ +static +unsigned int blk_queue_max_guaranteed_bio(struct queue_limits *limits) +{ + unsigned int max_segments = min((u16)BIO_MAX_PAGES, limits->max_segments); + unsigned int length; + + length = min(max_segments, 2U) * limits->logical_block_size; + if (max_segments > 2) + length += (max_segments - 2) * PAGE_SIZE; + + return length; +} + +static void blk_atomic_writes_update_limits(struct queue_limits *limits) +{ + unsigned int unit_limit = min(limits->max_hw_sectors << SECTOR_SHIFT, + blk_queue_max_guaranteed_bio(limits)); + + unit_limit = rounddown_pow_of_two(unit_limit); + + limits->atomic_write_max_sectors = + min(limits->atomic_write_hw_max >> SECTOR_SHIFT, + limits->max_hw_sectors); + limits->atomic_write_unit_min = + min(limits->atomic_write_hw_unit_min, unit_limit); + limits->atomic_write_unit_max = + min(limits->atomic_write_hw_unit_max, unit_limit); +} + /** * blk_queue_max_hw_sectors - set max sectors for a request for this queue * @q: the request queue for the device @@ -161,6 +196,9 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto max_sectors = min_not_zero(max_hw_sectors, limits->max_dev_sectors); max_sectors = min_t(unsigned int, max_sectors, BLK_DEF_MAX_SECTORS); limits->max_sectors = max_sectors; + + blk_atomic_writes_update_limits(limits); + q->backing_dev_info->io_pages = max_sectors >> (PAGE_SHIFT - 9); } EXPORT_SYMBOL(blk_queue_max_hw_sectors); @@ -196,6 +234,62 @@ void blk_queue_max_discard_sectors(struct request_queue *q, } EXPORT_SYMBOL(blk_queue_max_discard_sectors);
+/** + * blk_queue_atomic_write_max_bytes - set max bytes supported by + * the device for atomic write operations. + * @q: the request queue for the device + * @bytes: maximum bytes supported + */ +void blk_queue_atomic_write_max_bytes(struct request_queue *q, + unsigned int bytes) +{ + q->limits.atomic_write_hw_max = bytes; + blk_atomic_writes_update_limits(&q->limits); +} +EXPORT_SYMBOL(blk_queue_atomic_write_max_bytes); + +/** + * blk_queue_atomic_write_boundary_bytes - Device's logical block address space + * which an atomic write should not cross. + * @q: the request queue for the device + * @bytes: must be a power-of-two. + */ +void blk_queue_atomic_write_boundary_bytes(struct request_queue *q, + unsigned int bytes) +{ + q->limits.atomic_write_hw_boundary = bytes; +} +EXPORT_SYMBOL(blk_queue_atomic_write_boundary_bytes); + +/** + * blk_queue_atomic_write_unit_min_bytes - smallest unit that can be written + * atomically to the device. + * @q: the request queue for the device + * @bytes: must be a power-of-two. + */ +void blk_queue_atomic_write_unit_min_bytes(struct request_queue *q, + unsigned int bytes) +{ + q->limits.atomic_write_hw_unit_min = bytes; + blk_atomic_writes_update_limits(&q->limits); +} +EXPORT_SYMBOL(blk_queue_atomic_write_unit_min_bytes); + +/* + * blk_queue_atomic_write_unit_max_bytes - largest unit that can be written + * atomically to the device. + * @q: the request queue for the device + * @bytes: must be a power-of-two. + */ +void blk_queue_atomic_write_unit_max_bytes(struct request_queue *q, + unsigned int bytes) +{ + q->limits.atomic_write_hw_unit_max = bytes; + blk_atomic_writes_update_limits(&q->limits); +} +EXPORT_SYMBOL(blk_queue_atomic_write_unit_max_bytes); + + /** * blk_queue_max_write_same_sectors - set max sectors for a single write same * @q: the request queue for the device diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c index c95be9626a09..6916f1ed4590 100644 --- a/block/blk-sysfs.c +++ b/block/blk-sysfs.c @@ -126,6 +126,30 @@ static ssize_t queue_max_discard_segments_show(struct request_queue *q, return queue_var_show(queue_max_discard_segments(q), (page)); }
+static ssize_t queue_atomic_write_max_bytes_show(struct request_queue *q, + char *page) +{ + return queue_var_show(queue_atomic_write_max_bytes(q), page); +} + +static ssize_t queue_atomic_write_boundary_show(struct request_queue *q, + char *page) +{ + return queue_var_show(queue_atomic_write_boundary_bytes(q), page); +} + +static ssize_t queue_atomic_write_unit_min_show(struct request_queue *q, + char *page) +{ + return queue_var_show(queue_atomic_write_unit_min_bytes(q), page); +} + +static ssize_t queue_atomic_write_unit_max_show(struct request_queue *q, + char *page) +{ + return queue_var_show(queue_atomic_write_unit_max_bytes(q), page); +} + static ssize_t queue_max_integrity_segments_show(struct request_queue *q, char *page) { return queue_var_show(q->limits.max_integrity_segments, (page)); @@ -585,6 +609,11 @@ QUEUE_RO_ENTRY(queue_discard_max_hw, "discard_max_hw_bytes"); QUEUE_RW_ENTRY(queue_discard_max, "discard_max_bytes"); QUEUE_RO_ENTRY(queue_discard_zeroes_data, "discard_zeroes_data");
+QUEUE_RO_ENTRY(queue_atomic_write_max_bytes, "atomic_write_max_bytes"); +QUEUE_RO_ENTRY(queue_atomic_write_boundary, "atomic_write_boundary_bytes"); +QUEUE_RO_ENTRY(queue_atomic_write_unit_max, "atomic_write_unit_max_bytes"); +QUEUE_RO_ENTRY(queue_atomic_write_unit_min, "atomic_write_unit_min_bytes"); + QUEUE_RO_ENTRY(queue_write_same_max, "write_same_max_bytes"); QUEUE_RO_ENTRY(queue_write_zeroes_max, "write_zeroes_max_bytes"); QUEUE_RO_ENTRY(queue_zone_append_max, "zone_append_max_bytes"); @@ -639,6 +668,10 @@ static struct attribute *queue_attrs[] = { &queue_discard_max_entry.attr, &queue_discard_max_hw_entry.attr, &queue_discard_zeroes_data_entry.attr, + &queue_atomic_write_max_bytes_entry.attr, + &queue_atomic_write_boundary_entry.attr, + &queue_atomic_write_unit_min_entry.attr, + &queue_atomic_write_unit_max_entry.attr, &queue_write_same_max_entry.attr, &queue_write_zeroes_max_entry.attr, &queue_zone_append_max_entry.attr, diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index 1853ec569b72..996e03d50f41 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -133,6 +133,11 @@ typedef u8 __bitwise blk_status_t; */ #define BLK_STS_ZONE_ACTIVE_RESOURCE ((__force blk_status_t)16)
+/* + * Invalid size or alignment. + */ +#define BLK_STS_INVAL ((__force blk_status_t)19) + /** * blk_path_error - returns true if error may be path related * @error: status the request was completed with @@ -421,6 +426,7 @@ enum req_flag_bits { /* for driver use */ __REQ_DRV, __REQ_SWAP, /* swapping request. */ + __REQ_ATOMIC, /* for atomic write operations */ __REQ_NR_BITS, /* stops here */ };
@@ -445,6 +451,7 @@ enum req_flag_bits {
#define REQ_DRV (1ULL << __REQ_DRV) #define REQ_SWAP (1ULL << __REQ_SWAP) +#define REQ_ATOMIC (1ULL << __REQ_ATOMIC)
#define REQ_FAILFAST_MASK \ (REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER) diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index 2d6747877f7c..a53e70a6aeec 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -346,6 +346,16 @@ struct queue_limits { unsigned int discard_granularity; unsigned int discard_alignment;
+ /* atomic write limits */ + unsigned int atomic_write_hw_max; + unsigned int atomic_write_max_sectors; + unsigned int atomic_write_hw_boundary; + unsigned int atomic_write_hw_unit_min; + unsigned int atomic_write_unit_min; + unsigned int atomic_write_hw_unit_max; + unsigned int atomic_write_unit_max; + + unsigned short max_segments; unsigned short max_integrity_segments; unsigned short max_discard_segments; @@ -1097,6 +1107,9 @@ static inline unsigned int blk_queue_get_max_sectors(struct request *rq) if (unlikely(op == REQ_OP_WRITE_ZEROES)) return q->limits.max_write_zeroes_sectors;
+ if (rq->cmd_flags & REQ_ATOMIC) + return q->limits.atomic_write_max_sectors; + return q->limits.max_sectors; }
@@ -1187,6 +1200,14 @@ extern void blk_queue_max_zone_append_sectors(struct request_queue *q, extern void blk_queue_physical_block_size(struct request_queue *, unsigned int); extern void blk_queue_alignment_offset(struct request_queue *q, unsigned int alignment); +void blk_queue_atomic_write_max_bytes(struct request_queue *q, + unsigned int bytes); +void blk_queue_atomic_write_boundary_bytes(struct request_queue *q, + unsigned int bytes); +void blk_queue_atomic_write_unit_max_bytes(struct request_queue *q, + unsigned int bytes); +void blk_queue_atomic_write_unit_min_bytes(struct request_queue *q, + unsigned int bytes); void blk_queue_update_readahead(struct request_queue *q); extern void blk_limits_io_min(struct queue_limits *limits, unsigned int min); extern void blk_queue_io_min(struct request_queue *q, unsigned int min); @@ -1649,6 +1670,30 @@ static inline unsigned int bdev_max_active_zones(struct block_device *bdev) return 0; }
+static inline unsigned int +queue_atomic_write_unit_max_bytes(const struct request_queue *q) +{ + return q->limits.atomic_write_unit_max; +} + +static inline unsigned int +queue_atomic_write_unit_min_bytes(const struct request_queue *q) +{ + return q->limits.atomic_write_unit_min; +} + +static inline unsigned int +queue_atomic_write_boundary_bytes(const struct request_queue *q) +{ + return q->limits.atomic_write_hw_boundary; +} + +static inline unsigned int +queue_atomic_write_max_bytes(const struct request_queue *q) +{ + return q->limits.atomic_write_max_sectors << SECTOR_SHIFT; +} + static inline int queue_dma_alignment(const struct request_queue *q) { return q ? q->dma_alignment : 511; @@ -2102,4 +2147,24 @@ int fsync_bdev(struct block_device *bdev); struct super_block *freeze_bdev(struct block_device *bdev); int thaw_bdev(struct block_device *bdev, struct super_block *sb);
+static inline bool bdev_can_atomic_write(struct block_device *bdev) +{ + struct request_queue *bd_queue = bdev_get_queue(bdev); + struct queue_limits *limits = &bd_queue->limits; + + if (!limits->atomic_write_unit_min) + return false; + + if (bdev_is_partition(bdev)) { + sector_t bd_start_sect = bdev->bd_part->start_sect; + unsigned int alignment = + max(limits->atomic_write_unit_min, + limits->atomic_write_hw_boundary) >> SECTOR_SHIFT; + if (!IS_ALIGNED(bd_start_sect, alignment)) + return false; + } + + return true; +} + #endif /* _LINUX_BLKDEV_H */
From: Prasad Singamsetty <prasad.singamsetty@oracle.com>

maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3
CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Extend the statx system call to return additional info on atomic write support if the specified file is a block device.
Signed-off-by: Prasad Singamsetty <prasad.singamsetty@oracle.com>
Signed-off-by: John Garry <john.g.garry@oracle.com>
Signed-off-by: Long Li <leo.lilong@huawei.com>
---
 fs/block_dev.c         | 32 ++++++++++++++++++++++++++++++++
 fs/stat.c              | 12 ++++++++++++
 include/linux/blkdev.h |  7 +++++++
 3 files changed, 51 insertions(+)
diff --git a/fs/block_dev.c b/fs/block_dev.c index b9104fc0a395..749e36a49796 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -560,6 +560,38 @@ int sync_blockdev(struct block_device *bdev) } EXPORT_SYMBOL(sync_blockdev);
+/* + * Handle STATX_WRITE_ATOMIC for block devices. + */ +void bdev_statx(struct inode *backing_inode, struct kstat *stat, + u32 request_mask) +{ + struct block_device *bdev; + + if (!(request_mask & STATX_WRITE_ATOMIC)) + return; + + /* + * Note that backing_inode is the inode of a block device node file, + * not the block device's internal inode. Therefore it is *not* valid + * to use I_BDEV() here; the block device has to be looked up by i_rdev + * instead. + */ + bdev = blkdev_get_by_dev(backing_inode->i_rdev, FMODE_READ, NULL); + if (IS_ERR(bdev)) + return; + + if (request_mask & STATX_WRITE_ATOMIC && bdev_can_atomic_write(bdev)) { + struct request_queue *bd_queue = bdev_get_queue(bdev); + + generic_fill_statx_atomic_writes(stat, + queue_atomic_write_unit_min_bytes(bd_queue), + queue_atomic_write_unit_max_bytes(bd_queue)); + } + + blkdev_put(bdev, FMODE_READ); +} + /* * Write out and wait upon all dirty data associated with this * device. Filesystem data as well as the underlying block diff --git a/fs/stat.c b/fs/stat.c index f37e801fee98..1539882b483e 100644 --- a/fs/stat.c +++ b/fs/stat.c @@ -17,6 +17,7 @@ #include <linux/syscalls.h> #include <linux/pagemap.h> #include <linux/compat.h> +#include <linux/blkdev.h>
#include <linux/uaccess.h> #include <asm/unistd.h> @@ -207,6 +208,7 @@ static int vfs_statx(int dfd, const char __user *filename, int flags, { struct path path; unsigned lookup_flags = 0; + struct inode *backing_inode; int error;
if (flags & ~(AT_SYMLINK_NOFOLLOW | AT_NO_AUTOMOUNT | AT_EMPTY_PATH | @@ -231,6 +233,16 @@ static int vfs_statx(int dfd, const char __user *filename, int flags, if (path.mnt->mnt_root == path.dentry) stat->attributes |= STATX_ATTR_MOUNT_ROOT; stat->attributes_mask |= STATX_ATTR_MOUNT_ROOT; + + /* + * If this is a block device inode, override the filesystem + * attributes with the block device specific parameters that need to be + * obtained from the bdev backing inode. + */ + backing_inode = d_backing_inode(path.dentry); + if (S_ISBLK(backing_inode->i_mode)) + bdev_statx(backing_inode, stat, request_mask); + path_put(&path); if (retry_estale(error, lookup_flags)) { lookup_flags |= LOOKUP_REVAL; diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index a53e70a6aeec..f0027ea2905d 100644 --- a/include/linux/blkdev.h +++ b/include/linux/blkdev.h @@ -2128,6 +2128,8 @@ void invalidate_bdev(struct block_device *bdev); int truncate_bdev_range(struct block_device *bdev, fmode_t mode, loff_t lstart, loff_t lend); int sync_blockdev(struct block_device *bdev); +void bdev_statx(struct inode *backing_inode, struct kstat *stat, + u32 request_mask); #else static inline void invalidate_bdev(struct block_device *bdev) { @@ -2141,6 +2143,11 @@ static inline int sync_blockdev(struct block_device *bdev) { return 0; } + +static inline void bdev_statx(struct inode *backing_inode, struct kstat *stat, + u32 request_mask) +{ +} #endif int fsync_bdev(struct block_device *bdev);
From: John Garry <john.g.garry@oracle.com>

maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3
CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Support atomic writes by submitting a single BIO with the REQ_ATOMIC flag set.
It must be ensured that the atomic write adheres to its rules, such as a naturally aligned offset, so call blkdev_atomic_write_valid() for this purpose (upstream does this via blkdev_dio_invalid(), renamed from blkdev_dio_unaligned()). The BIO submission path already checks for atomic writes which are too large, so there is no need to check that here.
In blkdev_direct_IO(), if nr_pages exceeds BIO_MAX_PAGES (BIO_MAX_VECS in later kernels), then we cannot produce a single BIO, so error out in this case.
Finally, set FMODE_CAN_ATOMIC_WRITE when the bdev can support atomic writes and the file is opened with O_DIRECT.
Signed-off-by: John Garry <john.g.garry@oracle.com>
Signed-off-by: Long Li <leo.lilong@huawei.com>
---
 fs/block_dev.c | 30 +++++++++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)
diff --git a/fs/block_dev.c b/fs/block_dev.c index 749e36a49796..404c10c849a7 100644 --- a/fs/block_dev.c +++ b/fs/block_dev.c @@ -272,6 +272,9 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter, bio.bi_end_io = blkdev_bio_end_io_simple; bio.bi_ioprio = iocb->ki_ioprio;
+ if (iocb->ki_flags & IOCB_ATOMIC) + bio.bi_opf |= REQ_ATOMIC; + ret = bio_iov_iter_get_pages(&bio, iter); if (unlikely(ret)) goto out; @@ -452,6 +455,10 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages) bio->bi_opf = dio_bio_write_op(iocb); task_io_account_write(bio->bi_iter.bi_size); } + + if (iocb->ki_flags & IOCB_ATOMIC) + bio->bi_opf |= REQ_ATOMIC; + if (iocb->ki_flags & IOCB_NOWAIT) bio->bi_opf |= REQ_NOWAIT;
@@ -521,14 +528,30 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages) return ret; }
+static bool blkdev_atomic_write_valid(struct block_device *bdev, loff_t pos, + struct iov_iter *iter) +{ + struct request_queue *q = bdev_get_queue(bdev); + unsigned int min_bytes = queue_atomic_write_unit_min_bytes(q); + unsigned int max_bytes = queue_atomic_write_unit_max_bytes(q); + + return generic_atomic_write_valid(pos, iter, min_bytes, max_bytes); +} + static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter) { int nr_pages; + bool is_atomic = iocb->ki_flags & IOCB_ATOMIC; + struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES + 1); if (!nr_pages) return 0; + + if (is_atomic && !blkdev_atomic_write_valid(bdev, iocb->ki_pos, iter)) + return -EINVAL; + if (is_sync_kiocb(iocb) && nr_pages <= BIO_MAX_PAGES) return __blkdev_direct_IO_simple(iocb, iter, nr_pages);
@@ -1895,6 +1918,7 @@ EXPORT_SYMBOL(blkdev_get_by_dev); static int blkdev_open(struct inode * inode, struct file * filp) { struct block_device *bdev; + int err;
/* * Preserve backwards compatibility and allow large file access @@ -1920,7 +1944,11 @@ static int blkdev_open(struct inode * inode, struct file * filp) filp->f_mapping = bdev->bd_inode->i_mapping; filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
- return blkdev_get(bdev, filp->f_mode, filp); + err = blkdev_get(bdev, filp->f_mode, filp); + if (!err && bdev_can_atomic_write(bdev) && filp->f_flags & O_DIRECT) + filp->f_mode |= FMODE_CAN_ATOMIC_WRITE; + + return err; }
static void __blkdev_put(struct block_device *bdev, fmode_t mode, int for_part)
From: Alan Adamson <alan.adamson@oracle.com>

maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3
CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Add support to set block layer request_queue atomic write limits. The limits will be derived from either the namespace or controller atomic parameters.
NVMe atomic-related parameters are grouped into "normal" and "power-fail" (or PF) classes of parameters. For atomic write support, only the PF parameters are of interest. The "normal" parameters are concerned with racing reads and writes (which also applies to PF). See NVM Command Set Specification Revision 1.0d, section 2.1.4 for reference.
Whether to use per namespace or controller atomic parameters is decided by NSFEAT bit 1 - see Figure 97: Identify – Identify Namespace Data Structure, NVM Command Set.
NVMe namespaces may define an atomic boundary, whereby no atomic guarantees are provided for a write which straddles this LBA space boundary. The block layer merging policy is such that no merges may occur in which the resultant request would straddle such a boundary.
Unlike SCSI, NVMe specifies no granularity or alignment rules, apart from the atomic boundary rule. In addition, again unlike SCSI, there is no dedicated atomic write command - a write which adheres to the atomic size limit and boundary is implicitly atomic.
If NSFEAT bit 1 is set, the following parameters are of interest:
- NAWUPF (Namespace Atomic Write Unit Power Fail)
- NABSPF (Namespace Atomic Boundary Size Power Fail)
- NABO (Namespace Atomic Boundary Offset)
and we set request_queue limits as follows:
- atomic_write_unit_max = rounddown_pow_of_two(NAWUPF)
- atomic_write_max_bytes = NAWUPF
- atomic_write_boundary = NABSPF
In the unlikely scenario that NABO is non-zero, atomic writes will not be supported at all, as dealing with this adds extra complexity. This policy may change in the future.
In all cases, atomic_write_unit_min is set to the logical block size.
If NSFEAT bit 1 is unset, the following parameter is of interest:
- AWUPF (Atomic Write Unit Power Fail)
and we set request_queue limits as follows:
- atomic_write_unit_max = rounddown_pow_of_two(AWUPF)
- atomic_write_max_bytes = AWUPF
- atomic_write_boundary = 0
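The following standalone sketch (illustrative, not driver code) shows the derivation described above, assuming the identify fields are 0's based values in units of logical blocks, as consumed by nvme_update_atomic_write_disk_info() in the diff below:

#include <stdint.h>

struct atomic_limits {
	uint32_t hw_max;	/* feeds atomic_write_max_bytes */
	uint32_t hw_unit_min;	/* the logical block size */
	uint32_t hw_unit_max;	/* rounded down to a power-of-2 */
	uint32_t hw_boundary;	/* 0 when no boundary is reported */
};

static uint32_t rounddown_pow_of_two32(uint32_t x)
{
	/* clear low set bits until only the top set bit remains */
	while (x & (x - 1))
		x &= x - 1;
	return x;
}

static void derive_limits(struct atomic_limits *l, uint32_t block_size,
			  uint16_t awupf, uint16_t nabspf)
{
	uint32_t atomic_bs = (1 + awupf) * block_size;	/* 0's based field */

	l->hw_max = atomic_bs;
	l->hw_unit_min = block_size;
	l->hw_unit_max = rounddown_pow_of_two32(atomic_bs);
	l->hw_boundary = nabspf ? (1 + nabspf) * block_size : 0;
}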
A new function, nvme_valid_atomic_write(), is also called from the submission path to verify that a request submitted to the driver will actually be executed atomically. As mentioned, there is no dedicated NVMe atomic write command which could error out for a command which exceeds the controller's atomic write limits.
Note on NABSPF: There seems to be some vagueness in the spec as to whether NABSPF applies when NSFEAT bit 1 is unset. Figure 97 does not explicitly mention NABSPF and how it is affected by bit 1. However, Figure 4 does say to check Figure 97 for info about per-namespace parameters, which NABSPF is, so it is implied. Note that nvme_update_disk_info() currently checks the namespace parameter NABO regardless of this bit.
Signed-off-by: Alan Adamson <alan.adamson@oracle.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
jpg: total rewrite
Signed-off-by: John Garry <john.g.garry@oracle.com>
Signed-off-by: Long Li <leo.lilong@huawei.com>

nvme: update nvme atomic info in correct disk

Signed-off-by: Long Li <leo.lilong@huawei.com>
---
 drivers/nvme/host/core.c | 52 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 9fcc05c4f88c..4fa0c55787d2 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -752,6 +752,30 @@ static inline blk_status_t nvme_setup_write_zeroes(struct nvme_ns *ns, return BLK_STS_OK; }
+static bool nvme_valid_atomic_write(struct request *req) +{ + struct request_queue *q = req->q; + u32 boundary_bytes = queue_atomic_write_boundary_bytes(q); + + if (blk_rq_bytes(req) > queue_atomic_write_unit_max_bytes(q)) + return false; + + if (boundary_bytes) { + u64 mask = boundary_bytes - 1, imask = ~mask; + u64 start = blk_rq_pos(req) << SECTOR_SHIFT; + u64 end = start + blk_rq_bytes(req) - 1; + + /* If greater, then it must be crossing a boundary */ + if (blk_rq_bytes(req) > boundary_bytes) + return false; + + if ((start & imask) != (end & imask)) + return false; + } + + return true; +} + static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns, struct request *req, struct nvme_command *cmnd, enum nvme_opcode op) @@ -768,6 +792,13 @@ static inline blk_status_t nvme_setup_rw(struct nvme_ns *ns, if (req->cmd_flags & REQ_RAHEAD) dsmgmt |= NVME_RW_DSM_FREQ_PREFETCH;
+ /* + * Ensure that nothing has been sent which cannot be executed + * atomically. + */ + if (req->cmd_flags & REQ_ATOMIC && !nvme_valid_atomic_write(req)) + return BLK_STS_INVAL; + cmnd->rw.opcode = op; cmnd->rw.nsid = cpu_to_le32(ns->head->ns_id); cmnd->rw.slba = cpu_to_le64(nvme_sect_to_lba(ns, blk_rq_pos(req))); @@ -2011,6 +2042,22 @@ static int nvme_configure_metadata(struct nvme_ns *ns, struct nvme_id_ns *id) return 0; }
+static void nvme_update_atomic_write_disk_info(struct nvme_ns *ns, + struct nvme_id_ns *id, struct queue_limits *lim, + u32 bs, u32 atomic_bs) +{ + unsigned int boundary = 0; + + if (id->nsfeat & NVME_NS_FEAT_ATOMICS && id->nawupf) { + if (le16_to_cpu(id->nabspf)) + boundary = (le16_to_cpu(id->nabspf) + 1) * bs; + } + lim->atomic_write_hw_max = atomic_bs; + lim->atomic_write_hw_boundary = boundary; + lim->atomic_write_hw_unit_min = bs; + lim->atomic_write_hw_unit_max = rounddown_pow_of_two(atomic_bs); +} + static void nvme_set_queue_limits(struct nvme_ctrl *ctrl, struct request_queue *q) { @@ -2060,6 +2107,9 @@ static void nvme_update_disk_info(struct gendisk *disk, atomic_bs = (1 + le16_to_cpu(id->nawupf)) * bs; else atomic_bs = (1 + ns->ctrl->subsys->awupf) * bs; + + nvme_update_atomic_write_disk_info(ns, id, &disk->queue->limits, + bs, atomic_bs); }
if (id->nsfeat & NVME_NS_FEAT_IO_OPT) { @@ -2160,6 +2210,7 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id) goto out_unfreeze; nvme_set_chunk_sectors(ns, id); nvme_update_disk_info(ns->disk, ns, id); + nvme_set_queue_limits(ns->ctrl, ns->queue); blk_mq_unfreeze_queue(ns->disk->queue);
if (blk_queue_is_zoned(ns->queue)) { @@ -2172,6 +2223,7 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_id_ns *id) if (ns->head->disk) { blk_mq_freeze_queue(ns->head->disk->queue); nvme_update_disk_info(ns->head->disk, ns, id); + nvme_set_queue_limits(ns->ctrl, ns->head->disk->queue); blk_stack_limits(&ns->head->disk->queue->limits, &ns->queue->limits, 0); blk_queue_update_readahead(ns->head->disk->queue);
From: "Darrick J. Wong" djwong@kernel.org
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Add a new inode flag to require that all file data extent mappings must be aligned (both the file offset range and the allocated space itself) to the extent size hint. Having a separate COW extent size hint is no longer allowed.
The goal here is to enable sysadmins and users to mandate that all space mappings in a file have a startoff/blockcount aligned to (say) 2MB, and that the startblock/blockcount follow the same alignment.
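As a rough sketch of how an application might opt a file into forcealign once this lands (the mount point, file name, and the 2 MiB hint are assumptions; FS_XFLAG_FORCEALIGN may need defining by hand until uapi headers carry this series):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

#ifndef FS_XFLAG_FORCEALIGN
#define FS_XFLAG_FORCEALIGN 0x00020000 /* value from this series */
#endif

int main(void)
{
	struct fsxattr fsx;
	int fd = open("/mnt/xfs/datafile", O_CREAT | O_RDWR, 0644);

	if (fd < 0 || ioctl(fd, FS_IOC_FSGETXATTR, &fsx))
		return 1;
	fsx.fsx_extsize = 2 * 1024 * 1024; /* must be a power-of-2 hint */
	fsx.fsx_xflags |= FS_XFLAG_EXTSIZE | FS_XFLAG_FORCEALIGN;
	return ioctl(fd, FS_IOC_FSSETXATTR, &fsx) ? 1 : 0;
}

Per the setattr checks in this patch, the file must also carry an extent size hint flag and no CoW extent size hint.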
jpg: Enforce extsize is a power-of-2 for forcealign Signed-off-by: "Darrick J. Wong" djwong@kernel.org Co-developed-by: John Garry john.g.garry@oracle.com Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/libxfs/xfs_format.h | 6 +++++- fs/xfs/libxfs/xfs_inode_buf.c | 40 +++++++++++++++++++++++++++++++++++ fs/xfs/libxfs/xfs_inode_buf.h | 3 +++ fs/xfs/libxfs/xfs_sb.c | 2 ++ fs/xfs/xfs_inode.c | 13 ++++++++++++ fs/xfs/xfs_inode.h | 4 ++++ fs/xfs/xfs_ioctl.c | 32 ++++++++++++++++++++++++++++ fs/xfs/xfs_mount.h | 2 ++ fs/xfs/xfs_super.c | 5 +++++ include/uapi/linux/fs.h | 2 ++ 10 files changed, 108 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/libxfs/xfs_format.h b/fs/xfs/libxfs/xfs_format.h index 54832df8540f..fb537bb7dba5 100644 --- a/fs/xfs/libxfs/xfs_format.h +++ b/fs/xfs/libxfs/xfs_format.h @@ -353,6 +353,7 @@ xfs_sb_has_compat_feature( #define XFS_SB_FEAT_RO_COMPAT_RMAPBT (1 << 1) /* reverse map btree */ #define XFS_SB_FEAT_RO_COMPAT_REFLINK (1 << 2) /* reflinked files */ #define XFS_SB_FEAT_RO_COMPAT_INOBTCNT (1 << 3) /* inobt block counts */ +#define XFS_SB_FEAT_RO_COMPAT_FORCEALIGN (1 << 30) /* aligned file data extents */ #define XFS_SB_FEAT_RO_COMPAT_ALL \ (XFS_SB_FEAT_RO_COMPAT_FINOBT | \ XFS_SB_FEAT_RO_COMPAT_RMAPBT | \ @@ -972,15 +973,18 @@ static inline void xfs_dinode_put_rdev(struct xfs_dinode *dip, xfs_dev_t rdev) #define XFS_DIFLAG2_REFLINK_BIT 1 /* file's blocks may be shared */ #define XFS_DIFLAG2_COWEXTSIZE_BIT 2 /* copy on write extent size hint */ #define XFS_DIFLAG2_BIGTIME_BIT 3 /* big timestamps */ +/* data extent mappings for regular files must be aligned to extent size hint */ +#define XFS_DIFLAG2_FORCEALIGN_BIT 5
#define XFS_DIFLAG2_DAX (1 << XFS_DIFLAG2_DAX_BIT) #define XFS_DIFLAG2_REFLINK (1 << XFS_DIFLAG2_REFLINK_BIT) #define XFS_DIFLAG2_COWEXTSIZE (1 << XFS_DIFLAG2_COWEXTSIZE_BIT) #define XFS_DIFLAG2_BIGTIME (1 << XFS_DIFLAG2_BIGTIME_BIT) +#define XFS_DIFLAG2_FORCEALIGN (1 << XFS_DIFLAG2_FORCEALIGN_BIT)
#define XFS_DIFLAG2_ANY \ (XFS_DIFLAG2_DAX | XFS_DIFLAG2_REFLINK | XFS_DIFLAG2_COWEXTSIZE | \ - XFS_DIFLAG2_BIGTIME) + XFS_DIFLAG2_BIGTIME | XFS_DIFLAG2_FORCEALIGN)
static inline bool xfs_dinode_has_bigtime(const struct xfs_dinode *dip) { diff --git a/fs/xfs/libxfs/xfs_inode_buf.c b/fs/xfs/libxfs/xfs_inode_buf.c index 0970ae3fe538..dd9e5de65d52 100644 --- a/fs/xfs/libxfs/xfs_inode_buf.c +++ b/fs/xfs/libxfs/xfs_inode_buf.c @@ -574,6 +574,14 @@ xfs_dinode_verify( !xfs_has_bigtime(mp)) return __this_address;
+ if (flags2 & XFS_DIFLAG2_FORCEALIGN) { + fa = xfs_inode_validate_forcealign(mp, mode, flags, + be32_to_cpu(dip->di_extsize), + be32_to_cpu(dip->di_cowextsize)); + if (fa) + return fa; + } + return NULL; }
@@ -699,3 +707,35 @@ xfs_inode_validate_cowextsize(
return NULL; } + +/* Validate the forcealign inode flag */ +xfs_failaddr_t +xfs_inode_validate_forcealign( + struct xfs_mount *mp, + uint16_t mode, + uint16_t flags, + uint32_t extsize, + uint32_t cowextsize) +{ + /* superblock rocompat feature flag */ + if (!xfs_has_forcealign(mp)) + return __this_address; + + /* Only regular files and directories */ + if (!S_ISDIR(mode) && !S_ISREG(mode)) + return __this_address; + + /* Doesn't apply to realtime files */ + if (flags & XFS_DIFLAG_REALTIME) + return __this_address; + + /* Requires a non-zero power-of-2 extent size hint */ + if (extsize == 0 || !is_power_of_2(extsize)) + return __this_address; + + /* Requires no cow extent size hint */ + if (cowextsize != 0) + return __this_address; + + return NULL; +} diff --git a/fs/xfs/libxfs/xfs_inode_buf.h b/fs/xfs/libxfs/xfs_inode_buf.h index 05c3640e135a..1bcf1415a4b5 100644 --- a/fs/xfs/libxfs/xfs_inode_buf.h +++ b/fs/xfs/libxfs/xfs_inode_buf.h @@ -62,6 +62,9 @@ xfs_failaddr_t xfs_inode_validate_extsize(struct xfs_mount *mp, xfs_failaddr_t xfs_inode_validate_cowextsize(struct xfs_mount *mp, uint32_t cowextsize, uint16_t mode, uint16_t flags, uint64_t flags2); +xfs_failaddr_t xfs_inode_validate_forcealign(struct xfs_mount *mp, + uint16_t mode, uint16_t flags, uint32_t extsize, + uint32_t cowextsize);
static inline uint64_t xfs_inode_encode_bigtime(struct timespec64 tv) { diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c index c099ccf2787d..a016ba008019 100644 --- a/fs/xfs/libxfs/xfs_sb.c +++ b/fs/xfs/libxfs/xfs_sb.c @@ -116,6 +116,8 @@ xfs_sb_version_to_features( features |= XFS_FEAT_REFLINK; if (sbp->sb_features_ro_compat & XFS_SB_FEAT_RO_COMPAT_INOBTCNT) features |= XFS_FEAT_INOBTCNT; + if (sbp->sb_features_ro_compat & XFS_SB_FEAT_RO_COMPAT_FORCEALIGN) + features |= XFS_FEAT_FORCEALIGN; if (sbp->sb_features_incompat & XFS_SB_FEAT_INCOMPAT_FTYPE) features |= XFS_FEAT_FTYPE; if (sbp->sb_features_incompat & XFS_SB_FEAT_INCOMPAT_SPINODES) diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index 268bbc2d978b..c841a086cbce 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -643,6 +643,8 @@ _xfs_dic2xflags( flags |= FS_XFLAG_DAX; if (di_flags2 & XFS_DIFLAG2_COWEXTSIZE) flags |= FS_XFLAG_COWEXTSIZE; + if (di_flags2 & XFS_DIFLAG2_FORCEALIGN) + flags |= FS_XFLAG_FORCEALIGN; }
if (has_attr) @@ -759,6 +761,17 @@ xfs_inode_inherit_flags2( } if (pip->i_d.di_flags2 & XFS_DIFLAG2_DAX) ip->i_d.di_flags2 |= XFS_DIFLAG2_DAX; + if (pip->i_d.di_flags2 & XFS_DIFLAG2_FORCEALIGN) + ip->i_d.di_flags2 |= XFS_DIFLAG2_FORCEALIGN; + + if (ip->i_d.di_flags2 & XFS_DIFLAG2_FORCEALIGN) { + xfs_failaddr_t failaddr; + failaddr = xfs_inode_validate_forcealign(ip->i_mount, + VFS_I(ip)->i_mode, ip->i_d.di_flags, ip->i_d.di_extsize, + ip->i_d.di_cowextsize); + if (failaddr) + ip->i_d.di_flags2 &= ~XFS_DIFLAG2_FORCEALIGN; + } }
/* diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h index b552daae323f..53a2b90486f6 100644 --- a/fs/xfs/xfs_inode.h +++ b/fs/xfs/xfs_inode.h @@ -268,6 +268,10 @@ static inline bool xfs_inode_has_bigtime(struct xfs_inode *ip) return ip->i_d.di_flags2 & XFS_DIFLAG2_BIGTIME; }
+static inline bool xfs_inode_forcealign(struct xfs_inode *ip) +{ + return ip->i_d.di_flags2 & XFS_DIFLAG2_FORCEALIGN; +} /* * Return the buftarg used for data allocations on a given inode. */ diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c index 2337eb272235..a53038b8f736 100644 --- a/fs/xfs/xfs_ioctl.c +++ b/fs/xfs/xfs_ioctl.c @@ -1198,6 +1198,8 @@ xfs_flags2diflags2( di_flags2 |= XFS_DIFLAG2_DAX; if (xflags & FS_XFLAG_COWEXTSIZE) di_flags2 |= XFS_DIFLAG2_COWEXTSIZE; + if (xflags & FS_XFLAG_FORCEALIGN) + di_flags2 |= XFS_DIFLAG2_FORCEALIGN;
return di_flags2; } @@ -1236,6 +1238,22 @@ xfs_ioctl_setattr_xflags( if (di_flags2 && !xfs_has_v3inodes(mp)) return -EINVAL;
+ /* + * Force-align requires a nonzero extent size hint and a zero cow + * extent size hint. It doesn't apply to realtime files. + */ + if (fa->fsx_xflags & FS_XFLAG_FORCEALIGN) { + if (!xfs_has_forcealign(mp)) + return -EINVAL; + if (fa->fsx_xflags & FS_XFLAG_COWEXTSIZE) + return -EINVAL; + if (!(fa->fsx_xflags & (FS_XFLAG_EXTSIZE | + FS_XFLAG_EXTSZINHERIT))) + return -EINVAL; + if (fa->fsx_xflags & FS_XFLAG_REALTIME) + return -EINVAL; + } + ip->i_d.di_flags = xfs_flags2diflags(ip, fa->fsx_xflags); ip->i_d.di_flags2 = di_flags2;
@@ -1339,6 +1357,9 @@ xfs_ioctl_setattr_check_extsize( struct xfs_mount *mp = ip->i_mount; xfs_extlen_t size; xfs_fsblock_t extsize_fsb; + xfs_failaddr_t failaddr; + uint16_t new_diflags; + uint16_t new_diflags2;
if (S_ISREG(VFS_I(ip)->i_mode) && ip->i_df.if_nextents && ((ip->i_d.di_extsize << mp->m_sb.sb_blocklog) != fa->fsx_extsize)) @@ -1363,6 +1384,17 @@ xfs_ioctl_setattr_check_extsize( if (fa->fsx_extsize % size) return -EINVAL;
+ new_diflags = xfs_flags2diflags(ip, fa->fsx_xflags); + new_diflags2 = xfs_flags2diflags2(ip, fa->fsx_xflags); + if (new_diflags2 & XFS_DIFLAG2_FORCEALIGN) { + failaddr = xfs_inode_validate_forcealign(ip->i_mount, + VFS_I(ip)->i_mode, new_diflags, + XFS_B_TO_FSB(mp, fa->fsx_extsize), + XFS_B_TO_FSB(mp, fa->fsx_cowextsize)); + if (failaddr) + return -EINVAL; + } + return 0; }
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h index 21547ff97b5a..bc0ed9247f47 100644 --- a/fs/xfs/xfs_mount.h +++ b/fs/xfs/xfs_mount.h @@ -274,6 +274,7 @@ typedef struct xfs_mount { #define XFS_FEAT_INOBTCNT (1ULL << 23) /* inobt block counts */ #define XFS_FEAT_BIGTIME (1ULL << 24) /* large timestamps */ #define XFS_FEAT_NEEDSREPAIR (1ULL << 25) /* needs xfs_repair */ +#define XFS_FEAT_FORCEALIGN (1ULL << 27) /* aligned file data extents */
/* Mount features */ #define XFS_FEAT_NOATTR2 (1ULL << 48) /* disable attr2 creation */ @@ -336,6 +337,7 @@ __XFS_HAS_FEAT(realtime, REALTIME) __XFS_HAS_FEAT(inobtcounts, INOBTCNT) __XFS_HAS_FEAT(bigtime, BIGTIME) __XFS_HAS_FEAT(needsrepair, NEEDSREPAIR) +__XFS_HAS_FEAT(forcealign, FORCEALIGN)
/* * Mount features diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index 502fb08bfd38..cc7962438f26 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1639,6 +1639,7 @@ xfs_fc_fill_super( "DAX unsupported by block device. Turning off DAX."); xfs_mount_set_dax_mode(mp, XFS_DAX_NEVER); } + if (xfs_has_reflink(mp)) { xfs_alert(mp, "DAX and reflink cannot be used together!"); @@ -1657,6 +1658,10 @@ xfs_fc_fill_super( } }
+ if (xfs_has_forcealign(mp)) + xfs_warn(mp, +"EXPERIMENTAL forced data extent alignment feature in use. Use at your own risk!"); + if (xfs_has_reflink(mp)) { if (mp->m_sb.sb_rblocks) { xfs_alert(mp, diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index 78f4091ed188..822c9f4c4ce3 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -140,6 +140,8 @@ struct fsxattr { #define FS_XFLAG_FILESTREAM 0x00004000 /* use filestream allocator */ #define FS_XFLAG_DAX 0x00008000 /* use DAX for IO */ #define FS_XFLAG_COWEXTSIZE 0x00010000 /* CoW extent size allocator hint */ +/* data extent mappings for regular files must be aligned to extent size hint */ +#define FS_XFLAG_FORCEALIGN 0x00020000 #define FS_XFLAG_HASATTR 0x80000000 /* no DIFLAG for this */
/* the read-only stuff doesn't really belong here, but any other place is
From: "Darrick J. Wong" djwong@kernel.org
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Enable this feature.
Signed-off-by: "Darrick J. Wong" djwong@kernel.org Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/libxfs/xfs_format.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/libxfs/xfs_format.h b/fs/xfs/libxfs/xfs_format.h index fb537bb7dba5..71629d6dfd1b 100644 --- a/fs/xfs/libxfs/xfs_format.h +++ b/fs/xfs/libxfs/xfs_format.h @@ -358,7 +358,8 @@ xfs_sb_has_compat_feature( (XFS_SB_FEAT_RO_COMPAT_FINOBT | \ XFS_SB_FEAT_RO_COMPAT_RMAPBT | \ XFS_SB_FEAT_RO_COMPAT_REFLINK| \ - XFS_SB_FEAT_RO_COMPAT_INOBTCNT) + XFS_SB_FEAT_RO_COMPAT_INOBTCNT | \ + XFS_SB_FEAT_RO_COMPAT_FORCEALIGN) #define XFS_SB_FEAT_RO_COMPAT_UNKNOWN ~XFS_SB_FEAT_RO_COMPAT_ALL static inline bool xfs_sb_has_ro_compat_feature(
From: "Darrick J. Wong" djwong@kernel.org
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
The existing extsize hint code already did the work of expanding file range mapping requests so that the range is aligned to the hint value. Now add the code we need to guarantee that the space allocations are also always aligned.
XXX: still need to check all this with reflink
Signed-off-by: "Darrick J. Wong" djwong@kernel.org Co-developed-by: John Garry john.g.garry@oracle.com Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/libxfs/xfs_bmap.c | 16 ++++++++++++++-- fs/xfs/xfs_iomap.c | 4 +++- 2 files changed, 17 insertions(+), 3 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index 15e9e335d167..064ce4588d4a 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -3487,6 +3487,18 @@ xfs_bmap_btalloc( args.fsbno = ap->blkno; args.oinfo = XFS_RMAP_OINFO_SKIP_UPDATE;
+ /* + * xfs_get_cowextsz_hint() returns extsz_hint for when forcealign is + * set as forcealign and cowextsz_hint are mutually exclusive + */ + if (xfs_inode_forcealign(ap->ip) && align) { + args.alignment = align; + if (stripe_align == 0 || stripe_align % align) + stripe_align = align; + } else { + args.alignment = 1; + } + /* Trim the allocation back to the maximum an AG can fit. */ args.maxlen = min(ap->length, mp->m_ag_max_usable); blen = 0; @@ -3577,7 +3589,6 @@ xfs_bmap_btalloc( args.minalignslop = 0; } } else { - args.alignment = 1; args.minalignslop = 0; } args.postallocs = 1; @@ -3604,7 +3615,8 @@ xfs_bmap_btalloc( if ((error = xfs_alloc_vextent(&args))) return error; } - if (isaligned && args.fsbno == NULLFSBLOCK) { + + if (isaligned && args.fsbno == NULLFSBLOCK && args.alignment <= 1) { /* * allocation failed, so turn off alignment and * try again. diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index 76285db4aaec..af2fe2e2bd4b 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -167,7 +167,9 @@ xfs_eof_alignment( * If mounted with the "-o swalloc" option the alignment is * increased from the strip unit size to the stripe width. */ - if (mp->m_swidth && xfs_has_swalloc(mp)) + if (xfs_inode_forcealign(ip)) + align = xfs_get_extsz_hint(ip); + else if (mp->m_swidth && xfs_has_swalloc(mp)) align = mp->m_swidth; else if (mp->m_dalign) align = mp->m_dalign;
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
When forcealign is enabled, we want the EOF to be aligned as well, so do not free EOF blocks.

Add a helper function, xfs_get_extsz(), to get the guaranteed extent size alignment when forcealign is enabled. Since this code is only relevant to forcealign, and forcealign is not yet possible for RT, ignore RT.
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_bmap_util.c | 3 +++ fs/xfs/xfs_inode.c | 14 ++++++++++++++ fs/xfs/xfs_inode.h | 1 + 3 files changed, 18 insertions(+)
diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c index edf62092125c..94e5d57de432 100644 --- a/fs/xfs/xfs_bmap_util.c +++ b/fs/xfs/xfs_bmap_util.c @@ -654,6 +654,9 @@ xfs_free_eofblocks( * of the file. If not, then there is nothing to do. */ end_fsb = XFS_B_TO_FSB(mp, (xfs_ufsize_t)XFS_ISIZE(ip)); + /* Do not free blocks when forcing extent sizes */ + if (xfs_get_extsz(ip) > 1) + end_fsb = roundup_64(end_fsb, xfs_get_extsz(ip)); last_fsb = XFS_B_TO_FSB(mp, mp->m_super->s_maxbytes); if (last_fsb <= end_fsb) return 0; diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index c841a086cbce..368cd97ae04c 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -68,6 +68,20 @@ xfs_get_extsz_hint( return 0; }
+/* + * Helper function to extract extent size. It will return a power-of-2, + * as forcealign requires this. + */ +xfs_extlen_t +xfs_get_extsz( + struct xfs_inode *ip) +{ + if (xfs_inode_forcealign(ip) && ip->i_d.di_extsize) + return ip->i_d.di_extsize; + + return 1; +} + /* * Helper function to extract CoW extent size hint from inode. * Between the extent size hint and the CoW extent size hint, we diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h index 53a2b90486f6..5d740f793782 100644 --- a/fs/xfs/xfs_inode.h +++ b/fs/xfs/xfs_inode.h @@ -493,6 +493,7 @@ void xfs_lock_two_inodes(struct xfs_inode *ip0, uint ip0_mode, struct xfs_inode *ip1, uint ip1_mode);
xfs_extlen_t xfs_get_extsz_hint(struct xfs_inode *ip); +xfs_extlen_t xfs_get_extsz(struct xfs_inode *ip); xfs_extlen_t xfs_get_cowextsz_hint(struct xfs_inode *ip);
int xfs_dir_ialloc(struct xfs_trans **, struct xfs_inode *, umode_t,
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
For FS_XFLAG_FORCEALIGN support, we want to treat any sub-extent IO like sub-fsblock DIO, in that we will zero the sub-extent when the mapping is unwritten.
This will be important for atomic writes support, in that atomically writing over a partially written extent would mean that we would need to do the unwritten extent conversion write separately, and the write could no longer be atomic.
It is the task of the FS to set iomap.extent_shift per iter to indicate that sub-extent zeroing is required.
Maybe a macro like i_blocksize() should be introduced for extent sizes, instead of using extent_shift.
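The zeroing granule arithmetic can be sanity-checked in isolation. A minimal userspace sketch (the position, length, and shift values are made up; the real code operates on the in-flight iomap):

#include <stdio.h>

int main(void)
{
	unsigned long long pos = 20480, len = 8192;       /* 8 KiB write at 20 KiB */
	unsigned int blocksize = 4096, extent_shift = 4;
	unsigned int granule = blocksize << extent_shift; /* 64 KiB granule */
	unsigned int head = pos & (granule - 1);          /* zero [pos - head, pos) */
	unsigned int tail = (pos + len) & (granule - 1);

	/* prints: zero 20480 bytes before, 36864 bytes after */
	printf("zero %u bytes before, %u bytes after\n",
	       head, tail ? granule - tail : 0);
	return 0;
}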
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/iomap/direct-io.c | 23 ++++++++++++++++------- include/linux/iomap.h | 1 + 2 files changed, 17 insertions(+), 7 deletions(-)
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c index 892a4f8109e5..6cdacdd141f6 100644 --- a/fs/iomap/direct-io.c +++ b/fs/iomap/direct-io.c @@ -210,15 +210,22 @@ iomap_dio_zero(struct iomap_dio *dio, struct iomap *iomap, loff_t pos, struct page *page = ZERO_PAGE(0); int flags = REQ_SYNC | REQ_IDLE; struct bio *bio; + unsigned size; + unsigned nr_pages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
- bio = bio_alloc(GFP_KERNEL, 1); + bio = bio_alloc(GFP_KERNEL, nr_pages); bio_set_dev(bio, iomap->bdev); bio->bi_iter.bi_sector = iomap_sector(iomap, pos); bio->bi_private = dio; bio->bi_end_io = iomap_dio_bio_end_io;
- get_page(page); - __bio_add_page(bio, page, len, 0); + while (len > 0) { + size = len > PAGE_SIZE ? PAGE_SIZE : len; + get_page(page); + __bio_add_page(bio, page, size, 0); + len -= size; + pos += size; + } bio_set_op_attrs(bio, REQ_OP_WRITE, flags); iomap_dio_submit_bio(dio, iomap, bio, pos); } @@ -228,7 +235,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, struct iomap_dio *dio, struct iomap *iomap) { unsigned int blkbits = blksize_bits(bdev_logical_block_size(iomap->bdev)); - unsigned int fs_block_size = i_blocksize(inode), pad; + unsigned int zeroing_size, pad; unsigned int align = iov_iter_alignment(dio->submit.iter); struct bio *bio; bool need_zeroout = false; @@ -237,6 +244,8 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, size_t copied = 0; size_t orig_count;
+ zeroing_size = i_blocksize(inode) << iomap->extent_shift; + if ((pos | length | align) & ((1 << blkbits) - 1)) return -EINVAL;
@@ -280,7 +289,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length,
if (need_zeroout) { /* zero out from the start of the block to the write offset */ - pad = pos & (fs_block_size - 1); + pad = pos & (zeroing_size - 1); if (pad) iomap_dio_zero(dio, iomap, pos - pad, pad); } @@ -345,9 +354,9 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, if (need_zeroout || ((dio->flags & IOMAP_DIO_WRITE) && pos >= i_size_read(inode))) { /* zero out from the end of the write to the end of the block */ - pad = pos & (fs_block_size - 1); + pad = pos & (zeroing_size - 1); if (pad) - iomap_dio_zero(dio, iomap, pos, fs_block_size - pad); + iomap_dio_zero(dio, iomap, pos, zeroing_size - pad); } out: /* Undo iter limitation to current extent */ diff --git a/include/linux/iomap.h b/include/linux/iomap.h index 0965d5f12858..d14a729d40ce 100644 --- a/include/linux/iomap.h +++ b/include/linux/iomap.h @@ -93,6 +93,7 @@ struct iomap { u64 length; /* length of mapping, bytes */ u16 type; /* type of mapping */ u16 flags; /* flags for mapping */ + unsigned int extent_shift; struct block_device *bdev; /* block device for I/O */ struct dax_device *dax_dev; /* dax_dev for dax operations */ void *inline_data;
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Set iomap->extent_shift when sub-extent zeroing is required.
We treat a sub-extent write the same as an unaligned write, so we can leverage the existing sub-fsblock unaligned write support, i.e. try a shared lock with the IOMAP_DIO_OVERWRITE_ONLY flag; if that fails, try the exclusive lock.

In xfs_iomap_write_unwritten(), FSB calculations are now based on the extsize.
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_file.c | 4 ++-- fs/xfs/xfs_iomap.c | 15 +++++++++++++-- 2 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index 52643eac5d46..1e4da3479432 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -597,8 +597,8 @@ xfs_file_dio_aio_write( * the inode as necessary for EOF zeroing cases and fill out the new * inode size as appropriate. */ - if ((iocb->ki_pos & mp->m_blockmask) || - ((iocb->ki_pos + count) & mp->m_blockmask)) { + if ((iocb->ki_pos & (XFS_FSB_TO_B(mp, xfs_get_extsz(ip)) - 1)) || + ((iocb->ki_pos + count) & (XFS_FSB_TO_B(mp, xfs_get_extsz(ip)) - 1))) { unaligned_io = 1;
/* diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c index af2fe2e2bd4b..dc01689988ab 100644 --- a/fs/xfs/xfs_iomap.c +++ b/fs/xfs/xfs_iomap.c @@ -90,6 +90,7 @@ xfs_bmbt_to_iomap( { struct xfs_mount *mp = ip->i_mount; struct xfs_buftarg *target = xfs_inode_buftarg(ip); + xfs_extlen_t extsz = xfs_get_extsz(ip);
if (unlikely(!xfs_valid_startblock(ip, imap->br_startblock))) return xfs_alert_fsblock_zero(ip, imap); @@ -120,6 +121,8 @@ xfs_bmbt_to_iomap(
iomap->validity_cookie = sequence_cookie; iomap->page_ops = &xfs_iomap_page_ops; + if (extsz > 1) + iomap->extent_shift = ffs(extsz) - 1; return 0; }
@@ -546,11 +549,19 @@ xfs_iomap_write_unwritten( xfs_fsize_t i_size; uint resblks; int error; + xfs_extlen_t extsz = xfs_get_extsz(ip);
trace_xfs_unwritten_convert(ip, offset, count);
- offset_fsb = XFS_B_TO_FSBT(mp, offset); - count_fsb = XFS_B_TO_FSB(mp, (xfs_ufsize_t)offset + count); + if (extsz > 1) { + xfs_extlen_t extsize_bytes = XFS_FSB_TO_B(mp, extsz); + + offset_fsb = XFS_B_TO_FSBT(mp, round_down(offset, extsize_bytes)); + count_fsb = XFS_B_TO_FSB(mp, round_up(offset + count, extsize_bytes)); + } else { + offset_fsb = XFS_B_TO_FSBT(mp, offset); + count_fsb = XFS_B_TO_FSB(mp, (xfs_ufsize_t)offset + count); + } count_fsb = (xfs_filblks_t)(count_fsb - offset_fsb);
/*
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Add a flag indicating that a regular file is enabled for atomic writes.
This is a file attribute that mirrors an ondisk inode flag. Actual support for untorn file writes (for now) depends on both the iflag and the underlying storage devices, which we can only really check at statx and pwritev2() time. This is the same story as FS_XFLAG_DAX, which signals to the fs that we should try to enable the fsdax IO path on the file (instead of the regular page cache), but applications have to query STATX_ATTR_DAX to find out if they really got that IO path.
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- include/uapi/linux/fs.h | 1 + 1 file changed, 1 insertion(+)
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index 822c9f4c4ce3..ca8b6f6c479d 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -142,6 +142,7 @@ struct fsxattr { #define FS_XFLAG_COWEXTSIZE 0x00010000 /* CoW extent size allocator hint */ /* data extent mappings for regular files must be aligned to extent size hint */ #define FS_XFLAG_FORCEALIGN 0x00020000 +#define FS_XFLAG_ATOMICWRITES 0x00040000 /* atomic writes enabled */ #define FS_XFLAG_HASATTR 0x80000000 /* no DIFLAG for this */
/* the read-only stuff doesn't really belong here, but any other place is
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Support atomic writes by producing a single BIO with REQ_ATOMIC flag set.
We rely on the FS to guarantee extent alignment, such that an atomic write should never straddle two or more extents. The FS should also validate the length and alignment of an atomic write.
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/iomap/direct-io.c | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c index 6cdacdd141f6..063ea9eb8afb 100644 --- a/fs/iomap/direct-io.c +++ b/fs/iomap/direct-io.c @@ -234,6 +234,7 @@ static loff_t iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, struct iomap_dio *dio, struct iomap *iomap) { + bool is_atomic = dio->iocb->ki_flags & IOCB_ATOMIC; unsigned int blkbits = blksize_bits(bdev_logical_block_size(iomap->bdev)); unsigned int zeroing_size, pad; unsigned int align = iov_iter_alignment(dio->submit.iter); @@ -307,6 +308,7 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, bio->bi_iter.bi_sector = iomap_sector(iomap, pos); bio->bi_write_hint = dio->iocb->ki_hint; bio->bi_ioprio = dio->iocb->ki_ioprio; + bio->bi_private = dio; bio->bi_end_io = iomap_dio_bio_end_io;
@@ -323,8 +325,16 @@ iomap_dio_bio_actor(struct inode *inode, loff_t pos, loff_t length, }
n = bio->bi_iter.bi_size; + if (is_atomic && (n != orig_count)) { + /* This bio should have covered the complete length */ + ret = -EINVAL; + bio_put(bio); + goto out; + } if (dio->flags & IOMAP_DIO_WRITE) { bio->bi_opf = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE; + if (is_atomic) + bio->bi_opf |= REQ_ATOMIC; if (use_fua) bio->bi_opf |= REQ_FUA; else
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Add initial support for FS_XFLAG_ATOMICWRITES when forcealign is enabled.

Current kernel support for atomic writes is based on HW support. As such, we must ensure extent alignment with atomic_write_unit_max so that an atomic write can be issued as a single HW-compliant IO operation.

rtvol also guarantees extent alignment, but support is initially based on forcealign, which is not yet supported for rtvol.
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/libxfs/xfs_format.h | 12 +++++++++--- fs/xfs/libxfs/xfs_sb.c | 2 ++ fs/xfs/xfs_inode.c | 2 ++ fs/xfs/xfs_inode.h | 6 ++++++ fs/xfs/xfs_ioctl.c | 15 +++++++++++++-- fs/xfs/xfs_mount.h | 3 +++ fs/xfs/xfs_super.c | 4 ++++ 7 files changed, 39 insertions(+), 5 deletions(-)
diff --git a/fs/xfs/libxfs/xfs_format.h b/fs/xfs/libxfs/xfs_format.h index 71629d6dfd1b..8e4d49595884 100644 --- a/fs/xfs/libxfs/xfs_format.h +++ b/fs/xfs/libxfs/xfs_format.h @@ -354,12 +354,16 @@ xfs_sb_has_compat_feature( #define XFS_SB_FEAT_RO_COMPAT_REFLINK (1 << 2) /* reflinked files */ #define XFS_SB_FEAT_RO_COMPAT_INOBTCNT (1 << 3) /* inobt block counts */ #define XFS_SB_FEAT_RO_COMPAT_FORCEALIGN (1 << 30) /* aligned file data extents */ +#define XFS_SB_FEAT_RO_COMPAT_ATOMICWRITES (1 << 31) /* atomicwrites enabled */ + #define XFS_SB_FEAT_RO_COMPAT_ALL \ (XFS_SB_FEAT_RO_COMPAT_FINOBT | \ XFS_SB_FEAT_RO_COMPAT_RMAPBT | \ XFS_SB_FEAT_RO_COMPAT_REFLINK| \ - XFS_SB_FEAT_RO_COMPAT_INOBTCNT | \ - XFS_SB_FEAT_RO_COMPAT_FORCEALIGN) + XFS_SB_FEAT_RO_COMPAT_INOBTCNT| \ + XFS_SB_FEAT_RO_COMPAT_FORCEALIGN| \ + XFS_SB_FEAT_RO_COMPAT_ATOMICWRITES) + #define XFS_SB_FEAT_RO_COMPAT_UNKNOWN ~XFS_SB_FEAT_RO_COMPAT_ALL static inline bool xfs_sb_has_ro_compat_feature( @@ -976,16 +980,18 @@ static inline void xfs_dinode_put_rdev(struct xfs_dinode *dip, xfs_dev_t rdev) #define XFS_DIFLAG2_BIGTIME_BIT 3 /* big timestamps */ /* data extent mappings for regular files must be aligned to extent size hint */ #define XFS_DIFLAG2_FORCEALIGN_BIT 5 +#define XFS_DIFLAG2_ATOMICWRITES_BIT 6
#define XFS_DIFLAG2_DAX (1 << XFS_DIFLAG2_DAX_BIT) #define XFS_DIFLAG2_REFLINK (1 << XFS_DIFLAG2_REFLINK_BIT) #define XFS_DIFLAG2_COWEXTSIZE (1 << XFS_DIFLAG2_COWEXTSIZE_BIT) #define XFS_DIFLAG2_BIGTIME (1 << XFS_DIFLAG2_BIGTIME_BIT) #define XFS_DIFLAG2_FORCEALIGN (1 << XFS_DIFLAG2_FORCEALIGN_BIT) +#define XFS_DIFLAG2_ATOMICWRITES (1 << XFS_DIFLAG2_ATOMICWRITES_BIT)
#define XFS_DIFLAG2_ANY \ (XFS_DIFLAG2_DAX | XFS_DIFLAG2_REFLINK | XFS_DIFLAG2_COWEXTSIZE | \ - XFS_DIFLAG2_BIGTIME | XFS_DIFLAG2_FORCEALIGN) + XFS_DIFLAG2_BIGTIME | XFS_DIFLAG2_FORCEALIGN | XFS_DIFLAG2_ATOMICWRITES)
static inline bool xfs_dinode_has_bigtime(const struct xfs_dinode *dip) { diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c index a016ba008019..a4354504986c 100644 --- a/fs/xfs/libxfs/xfs_sb.c +++ b/fs/xfs/libxfs/xfs_sb.c @@ -118,6 +118,8 @@ xfs_sb_version_to_features( features |= XFS_FEAT_INOBTCNT; if (sbp->sb_features_ro_compat & XFS_SB_FEAT_RO_COMPAT_FORCEALIGN) features |= XFS_FEAT_FORCEALIGN; + if (sbp->sb_features_ro_compat & XFS_SB_FEAT_RO_COMPAT_ATOMICWRITES) + features |= XFS_FEAT_ATOMICWRITES; if (sbp->sb_features_incompat & XFS_SB_FEAT_INCOMPAT_FTYPE) features |= XFS_FEAT_FTYPE; if (sbp->sb_features_incompat & XFS_SB_FEAT_INCOMPAT_SPINODES) diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index 368cd97ae04c..5fafc8f419cc 100644 --- a/fs/xfs/xfs_inode.c +++ b/fs/xfs/xfs_inode.c @@ -659,6 +659,8 @@ _xfs_dic2xflags( flags |= FS_XFLAG_COWEXTSIZE; if (di_flags2 & XFS_DIFLAG2_FORCEALIGN) flags |= FS_XFLAG_FORCEALIGN; + if (di_flags2 & XFS_DIFLAG2_ATOMICWRITES) + flags |= FS_XFLAG_ATOMICWRITES; }
if (has_attr) diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h index 5d740f793782..b5b97be319e6 100644 --- a/fs/xfs/xfs_inode.h +++ b/fs/xfs/xfs_inode.h @@ -272,6 +272,12 @@ static inline bool xfs_inode_forcealign(struct xfs_inode *ip) { return ip->i_d.di_flags2 & XFS_DIFLAG2_FORCEALIGN; } + +static inline bool xfs_inode_atomicwrites(struct xfs_inode *ip) +{ + return ip->i_d.di_flags2 & XFS_DIFLAG2_ATOMICWRITES; +} + /* * Return the buftarg used for data allocations on a given inode. */ diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c index a53038b8f736..e96ebea20991 100644 --- a/fs/xfs/xfs_ioctl.c +++ b/fs/xfs/xfs_ioctl.c @@ -1200,6 +1200,8 @@ xfs_flags2diflags2( di_flags2 |= XFS_DIFLAG2_COWEXTSIZE; if (xflags & FS_XFLAG_FORCEALIGN) di_flags2 |= XFS_DIFLAG2_FORCEALIGN; + if (xflags & FS_XFLAG_ATOMICWRITES) + di_flags2 |= XFS_DIFLAG2_ATOMICWRITES;
return di_flags2; } @@ -1212,10 +1214,12 @@ xfs_ioctl_setattr_xflags( { struct xfs_mount *mp = ip->i_mount; uint64_t di_flags2; + bool atomic_writes = fa->fsx_xflags & FS_XFLAG_ATOMICWRITES;
- /* Can't change realtime flag if any extents are allocated. */ + /* Can't change realtime or atomic flag if any extents are allocated. */ if ((ip->i_df.if_nextents || ip->i_delayed_blks) && - XFS_IS_REALTIME_INODE(ip) != (fa->fsx_xflags & FS_XFLAG_REALTIME)) + (XFS_IS_REALTIME_INODE(ip) != (fa->fsx_xflags & FS_XFLAG_REALTIME) || + atomic_writes != xfs_inode_atomicwrites(ip))) return -EINVAL;
/* If realtime flag is set then must have realtime device */ @@ -1254,6 +1258,13 @@ xfs_ioctl_setattr_xflags( return -EINVAL; }
+ if (atomic_writes) { + if (!xfs_has_atomicwrites(mp)) + return -EINVAL; + if (!(fa->fsx_xflags & FS_XFLAG_FORCEALIGN)) + return -EINVAL; + } + ip->i_d.di_flags = xfs_flags2diflags(ip, fa->fsx_xflags); ip->i_d.di_flags2 = di_flags2;
diff --git a/fs/xfs/xfs_mount.h b/fs/xfs/xfs_mount.h index bc0ed9247f47..888d6bf9bea7 100644 --- a/fs/xfs/xfs_mount.h +++ b/fs/xfs/xfs_mount.h @@ -275,6 +275,8 @@ typedef struct xfs_mount { #define XFS_FEAT_BIGTIME (1ULL << 24) /* large timestamps */ #define XFS_FEAT_NEEDSREPAIR (1ULL << 25) /* needs xfs_repair */ #define XFS_FEAT_FORCEALIGN (1ULL << 27) /* aligned file data extents */ +#define XFS_FEAT_ATOMICWRITES (1ULL << 28) /* atomic writes support */ +
/* Mount features */ #define XFS_FEAT_NOATTR2 (1ULL << 48) /* disable attr2 creation */ @@ -338,6 +340,7 @@ __XFS_HAS_FEAT(inobtcounts, INOBTCNT) __XFS_HAS_FEAT(bigtime, BIGTIME) __XFS_HAS_FEAT(needsrepair, NEEDSREPAIR) __XFS_HAS_FEAT(forcealign, FORCEALIGN) +__XFS_HAS_FEAT(atomicwrites, ATOMICWRITES)
/* * Mount features diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index cc7962438f26..d43f76a4b99a 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1662,6 +1662,10 @@ xfs_fc_fill_super( xfs_warn(mp, "EXPERIMENTAL forced data extent alignment feature in use. Use at your own risk!");
+ if (xfs_has_atomicwrites(mp)) + xfs_warn(mp, +"EXPERIMENTAL atomicwrites feature in use. Use at your own risk!"); + if (xfs_has_reflink(mp)) { if (mp->m_sb.sb_rblocks) { xfs_alert(mp,
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Support providing info on atomic write unit min and max for an inode.
For simplicity, we currently floor the min at the FS block size, which iomap DIO requires; a lower limit could be supported in future.

The atomic write unit min and max are limited by the guaranteed extent alignment for the inode.
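Once this is wired up, an application can discover the per-file limits through statx(). A hedged sketch; STATX_WRITE_ATOMIC and the stx_atomic_write_unit_* fields come from this series' uapi headers, and the path is an assumption:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/stat.h>

int main(void)
{
	struct statx stx = { 0 };

	/* raw syscall: glibc wrappers may predate these fields */
	if (syscall(SYS_statx, AT_FDCWD, "/mnt/xfs/datafile", 0,
		    STATX_WRITE_ATOMIC, &stx))
		return 1;
	printf("atomic write unit min/max: %u/%u bytes\n",
	       stx.stx_atomic_write_unit_min,
	       stx.stx_atomic_write_unit_max);
	return 0;
}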
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_iops.c | 40 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+)
diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c index a527a544a684..8446b145130c 100644 --- a/fs/xfs/xfs_iops.c +++ b/fs/xfs/xfs_iops.c @@ -507,6 +507,37 @@ xfs_stat_blksize( return PAGE_SIZE; }
+static void +xfs_get_atomic_write_attr( + struct xfs_inode *ip, + unsigned int *unit_min, + unsigned int *unit_max) +{ + xfs_extlen_t extsz = xfs_get_extsz(ip); + struct xfs_buftarg *target = xfs_inode_buftarg(ip); + struct block_device *bdev = target->bt_bdev; + struct request_queue *q = bdev_get_queue(bdev); + struct xfs_mount *mp = ip->i_mount; + struct xfs_sb *sbp = &mp->m_sb; + unsigned int awu_min, awu_max; + unsigned int extsz_bytes = XFS_FSB_TO_B(mp, extsz); + + awu_min = queue_atomic_write_unit_min_bytes(q); + awu_max = queue_atomic_write_unit_max_bytes(q); + + if (sbp->sb_blocksize > awu_max || awu_min > sbp->sb_blocksize || + !xfs_inode_atomicwrites(ip)) { + *unit_min = 0; + *unit_max = 0; + return; + } + + /* Floor at FS block size */ + *unit_min = max(sbp->sb_blocksize, awu_min); + + *unit_max = min(extsz_bytes, awu_max); +} + STATIC int xfs_vn_getattr( const struct path *path, @@ -564,6 +595,15 @@ xfs_vn_getattr( stat->blksize = BLKDEV_IOSIZE; stat->rdev = inode->i_rdev; break; + case S_IFREG: + if (request_mask & STATX_WRITE_ATOMIC) { + unsigned int unit_min, unit_max; + + xfs_get_atomic_write_attr(ip, &unit_min, &unit_max); + generic_fill_statx_atomic_writes(stat, + unit_min, unit_max); + } + fallthrough; default: stat->blksize = xfs_stat_blksize(ip); stat->rdev = 0;
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Validate that an atomic write adheres to length/offset rules. Since we require extent alignment for atomic writes, this effectively also enforces that the BIO which iomap produces is aligned.
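The rules being validated amount to: the write length must be a power of two within the advertised unit min/max, and the file offset must be naturally aligned to that length. A standalone restatement of those rules (illustrative; the in-kernel check is generic_atomic_write_valid() plus the extent alignment this patch relies on):

#include <stdbool.h>
#include <stdint.h>

static bool atomic_write_ok(uint64_t pos, uint64_t len,
			    uint32_t unit_min, uint32_t unit_max)
{
	if (len < unit_min || len > unit_max)
		return false;
	if (len & (len - 1))            /* length must be a power of 2 */
		return false;
	return (pos & (len - 1)) == 0;  /* offset naturally aligned to length */
}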
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_file.c | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index 1e4da3479432..75d7ca8d2a4a 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -586,6 +586,14 @@ xfs_file_dio_aio_write( size_t count = iov_iter_count(from); struct xfs_buftarg *target = xfs_inode_buftarg(ip);
+ if (iocb->ki_flags & IOCB_ATOMIC) { + if (!generic_atomic_write_valid(iocb->ki_pos, from, + i_blocksize(inode), + XFS_FSB_TO_B(mp, xfs_get_extsz(ip)))) { + return -EINVAL; + } + } + /* DIO must be aligned to device logical sector size */ if ((iocb->ki_pos | count) & target->bt_logical_sectormask) return -EINVAL;
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
When an inode is enabled for atomic writes, set the FMODE_CAN_ATOMIC_WRITE flag. We check for direct I/O and also check that the bdev can actually support atomic writes.
We rely on the block layer to reject atomic writes which exceed the bdev request_queue limits, so don't bother checking any such thing here.
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_file.c | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index 75d7ca8d2a4a..e77efd357301 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -1187,6 +1187,25 @@ xfs_file_remap_range( return remapped > 0 ? remapped : ret; }
+static bool xfs_file_open_can_atomicwrite( + struct inode *inode, + struct file *file) +{ + struct xfs_inode *ip = XFS_I(inode); + struct xfs_buftarg *target = xfs_inode_buftarg(ip); + + if (!(file->f_flags & O_DIRECT)) + return false; + + if (!xfs_inode_atomicwrites(ip)) + return false; + + if (!bdev_can_atomic_write(target->bt_bdev)) + return false; + + return true; +} + STATIC int xfs_file_open( struct inode *inode, @@ -1197,6 +1216,8 @@ xfs_file_open( if (xfs_is_shutdown(XFS_M(inode->i_sb))) return -EIO; file->f_mode |= FMODE_NOWAIT | FMODE_BUF_RASYNC; + if (xfs_file_open_can_atomicwrite(inode, file)) + file->f_mode |= FMODE_CAN_ATOMIC_WRITE; return 0; }
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
--------------------------------
Support the FS_IOC_SETATOMIC ioctl: if the filesystem and the file can support atomic writes, enable atomic writes on the file. Databases such as MySQL can use this ioctl to check for and enable atomic writes on a file.
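A hedged sketch of the intended flow from userspace (the path and the 16 KiB size are assumptions; FS_IOC_SETATOMIC and RWF_ATOMIC must come from headers carrying this series, and the file needs forcealign with a sufficiently large extent size hint set up beforehand):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/uio.h>
#include <linux/fs.h>

int main(void)
{
	unsigned int on = 1;
	struct iovec iov;
	void *buf;
	int fd = open("/mnt/xfs/datafile", O_RDWR | O_DIRECT);

	if (fd < 0 || ioctl(fd, FS_IOC_SETATOMIC, &on))
		return 1;
	if (posix_memalign(&buf, 4096, 16384))
		return 1;
	iov.iov_base = buf;
	iov.iov_len = 16384;
	/* offset 0 is naturally aligned to the 16 KiB length */
	return pwritev2(fd, &iov, 1, 0, RWF_ATOMIC) == 16384 ? 0 : 1;
}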
Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_ioctl.c | 45 +++++++++++++++++++++++++++++++++++++++++ include/uapi/linux/fs.h | 1 + 2 files changed, 46 insertions(+)
diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c index e96ebea20991..fbaf3d0ddd8e 100644 --- a/fs/xfs/xfs_ioctl.c +++ b/fs/xfs/xfs_ioctl.c @@ -2112,6 +2112,26 @@ xfs_fs_eofblocks_from_user( return 0; }
+static int +xfs_ioc_set_atomic_write( + struct xfs_inode *ip) +{ + struct xfs_trans *tp; + int error; + + tp = xfs_ioctl_setattr_get_trans(ip, NULL); + if (IS_ERR(tp)) { + error = PTR_ERR(tp); + goto out; + } + + ip->i_d.di_flags2 |= XFS_DIFLAG2_ATOMICWRITES; + + error = xfs_trans_commit(tp); +out: + return error; +} + /* * Note: some of the ioctl's return positive numbers as a * byte count indicating success, such as readlink_by_handle. @@ -2139,6 +2159,31 @@ xfs_file_ioctl( return xfs_ioc_getlabel(mp, arg); case FS_IOC_SETFSLABEL: return xfs_ioc_setlabel(filp, mp, arg); + case FS_IOC_SETATOMIC: + if (!xfs_has_atomicwrites(mp)) + return -1; + if (!S_ISREG(inode->i_mode)) + return -1; + if (xfs_inode_atomicwrites(ip)) + return 0; + if (!xfs_inode_forcealign(ip)) + return -1; + + xfs_ilock(ip, XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL); + error = xfs_ioc_set_atomic_write(ip); + xfs_iunlock(ip, XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL); + if (error) { + xfs_alert(mp, "%s: set ino 0x%llx atomic write fail!", + __func__, XFS_I(inode)->i_ino); + return -1; + } else { + struct xfs_buftarg *target = xfs_inode_buftarg(ip); + + if ((filp->f_flags & O_DIRECT) && + bdev_can_atomic_write(target->bt_bdev)) + filp->f_mode |= FMODE_CAN_ATOMIC_WRITE; + return 0; + } case XFS_IOC_ALLOCSP: case XFS_IOC_FREESP: case XFS_IOC_ALLOCSP64: diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h index ca8b6f6c479d..332b0709756b 100644 --- a/include/uapi/linux/fs.h +++ b/include/uapi/linux/fs.h @@ -217,6 +217,7 @@ struct fsxattr { #define FS_IOC_FSSETXATTR _IOW('X', 32, struct fsxattr) #define FS_IOC_GETFSLABEL _IOR(0x94, 49, char[FSLABEL_MAX]) #define FS_IOC_SETFSLABEL _IOW(0x94, 50, char[FSLABEL_MAX]) +#define FS_IOC_SETATOMIC _IOW(0x95, 2, uint)
/* * Inode flags (FS_IOC_GETFLAGS / FS_IOC_SETFLAGS)
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
--------------------------------
If a file has atomic writes enabled and the user initiates a direct write, convert the write to an atomic write. This lets databases such as MySQL use atomic writes without any further modification. It also means that some direct writes (e.g. with a misaligned offset or an invalid length) to files with atomic writes enabled will now fail.
Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_file.c | 3 +++ 1 file changed, 3 insertions(+)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index e77efd357301..748aa26793dc 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -811,6 +811,9 @@ xfs_file_write_iter( return xfs_file_dax_write(iocb, from);
if (iocb->ki_flags & IOCB_DIRECT) { + + if (xfs_inode_atomicwrites(ip)) + iocb->ki_flags |= IOCB_ATOMIC; /* * Allow a directio write to fall back to a buffered * write *only* in the case that we're doing a reflink
maillist inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
--------------------------------
Since the introduction of "fs: xfs: Introduce FORCEALIGN inode flag", generic/079 reports a failure. The reason is that vfs_ioc_fssetxattr_check() clears FS_XFLAG_EXTSIZE and FS_XFLAG_EXTSZINHERIT because fa is not fully populated.
Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_ioctl.c | 4 ++++ 1 file changed, 4 insertions(+)
diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c index fbaf3d0ddd8e..939b91124ce1 100644 --- a/fs/xfs/xfs_ioctl.c +++ b/fs/xfs/xfs_ioctl.c @@ -1649,6 +1649,10 @@ xfs_ioc_setxflags( }
xfs_fill_fsxattr(ip, false, &old_fa); + fa.fsx_extsize = old_fa.fsx_extsize; + fa.fsx_cowextsize = old_fa.fsx_cowextsize; + fa.fsx_projid = old_fa.fsx_projid; + fa.fsx_nextents = old_fa.fsx_nextents; error = vfs_ioc_fssetxattr_check(VFS_I(ip), &old_fa, &fa); if (error) { xfs_trans_cancel(tp);
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
When forcealign is enabled, we want the alignment mask to cover an aligned extent, similar to rtvol.
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_file.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index 748aa26793dc..10a829ffd908 100644 --- a/fs/xfs/xfs_file.c +++ b/fs/xfs/xfs_file.c @@ -60,7 +60,10 @@ xfs_is_falloc_aligned( } mask = XFS_FSB_TO_B(mp, mp->m_sb.sb_rextsize) - 1; } else { - mask = mp->m_sb.sb_blocksize - 1; + if (xfs_inode_forcealign(ip) && ip->i_d.di_extsize > 1) + mask = (mp->m_sb.sb_blocksize * ip->i_d.di_extsize) - 1; + else + mask = mp->m_sb.sb_blocksize - 1; }
return !((pos | len) & mask);
From: John Garry john.g.garry@oracle.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/all/20240326133813.3224593-1-john.g.garry@oracle.com...
--------------------------------
Like we already do for rtvol, only free full extents for forcealign in xfs_free_file_space().
Signed-off-by: John Garry john.g.garry@oracle.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_bmap_util.c | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c index 94e5d57de432..5879f03b8660 100644 --- a/fs/xfs/xfs_bmap_util.c +++ b/fs/xfs/xfs_bmap_util.c @@ -928,8 +928,11 @@ xfs_free_file_space( startoffset_fsb = XFS_B_TO_FSB(mp, offset); endoffset_fsb = XFS_B_TO_FSBT(mp, offset + len);
- /* We can only free complete realtime extents. */ - if (XFS_IS_REALTIME_INODE(ip) && mp->m_sb.sb_rextsize > 1) { + /* Free only complete extents. */ + if (xfs_inode_forcealign(ip) && ip->i_d.di_extsize > 1) { + startoffset_fsb = roundup_64(startoffset_fsb, ip->i_d.di_extsize); + endoffset_fsb = rounddown_64(endoffset_fsb, ip->i_d.di_extsize); + } else if (XFS_IS_REALTIME_INODE(ip) && mp->m_sb.sb_rextsize > 1) { startoffset_fsb = roundup_64(startoffset_fsb, mp->m_sb.sb_rextsize); endoffset_fsb = rounddown_64(endoffset_fsb,
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
--------------------------------
When unmapping blocks in a file with forced alignment, the unmap range must be aligned to the extent size hint. Within a single extent size hint unit, there cannot be both written and unwritten extents.
Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/libxfs/xfs_bmap.c | 6 ++++++ 1 file changed, 6 insertions(+)
diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c index 064ce4588d4a..704d457eed93 100644 --- a/fs/xfs/libxfs/xfs_bmap.c +++ b/fs/xfs/libxfs/xfs_bmap.c @@ -5288,6 +5288,12 @@ __xfs_bunmapi( XFS_STATS_INC(mp, xs_blk_unmap); isrt = (whichfork == XFS_DATA_FORK) && XFS_IS_REALTIME_INODE(ip); end = start + len; + if (xfs_inode_forcealign(ip) && ip->i_d.di_extsize > 1 + && S_ISREG(VFS_I(ip)->i_mode)) { + start = roundup_64(start, ip->i_d.di_extsize); + end = rounddown_64(end, ip->i_d.di_extsize); + len = end - start; + }
if (!xfs_iext_lookup_extent_before(ip, ifp, &end, &icur, &got)) { *rlen = 0;
From: Zhang Yi yi.zhang@huawei.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/linux-fsdevel/20240529095206.2568162-1-yi.zhang@huaw...
--------------------------------
Add a new helper, rem_u64(), to return just the remainder of an unsigned 64-bit division by a 32-bit divisor.
Signed-off-by: Zhang Yi yi.zhang@huawei.com Signed-off-by: Long Li leo.lilong@huawei.com --- include/linux/math64.h | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+)
diff --git a/include/linux/math64.h b/include/linux/math64.h index 66deb1fdc2ef..b5c4d1df08e5 100644 --- a/include/linux/math64.h +++ b/include/linux/math64.h @@ -3,6 +3,7 @@ #define _LINUX_MATH64_H
#include <linux/types.h> +#include <linux/log2.h> #include <vdso/math64.h> #include <asm/div64.h>
@@ -11,6 +12,20 @@ #define div64_long(x, y) div64_s64((x), (y)) #define div64_ul(x, y) div64_u64((x), (y))
+/** + * rem_u64 - remainder of unsigned 64bit divide with 32bit divisor + * @dividend: unsigned 64bit dividend + * @divisor: unsigned 32bit divisor + * + * Return: dividend % divisor + */ +static inline u32 rem_u64(u64 dividend, u32 divisor) +{ + if (is_power_of_2(divisor)) + return dividend & (divisor - 1); + return dividend % divisor; +} + /** * div_u64_rem - unsigned 64bit divide with 32bit divisor with remainder * @dividend: unsigned 64bit dividend @@ -85,6 +100,15 @@ static inline s64 div64_s64(s64 dividend, s64 divisor) #define div64_long(x, y) div_s64((x), (y)) #define div64_ul(x, y) div_u64((x), (y))
+#ifndef rem_u64 +static inline u32 rem_u64(u64 dividend, u32 divisor) +{ + if (is_power_of_2(divisor)) + return dividend & (divisor - 1); + return do_div(dividend, divisor); +} +#endif + #ifndef div_u64_rem static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder) {
From: Zhang Yi yi.zhang@huawei.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/linux-fsdevel/20240529095206.2568162-1-yi.zhang@huaw...
--------------------------------
iomap_truncate_page() always assumes that the block size of the inode being truncated is i_blocksize(); this is not true for some filesystems, e.g. XFS does extent size alignment for realtime inodes. Drop this assumption and pass the block size for zeroing into iomap_truncate_page(), allowing filesystems to indicate the correct block size.
Suggested-by: Dave Chinner david@fromorbit.com Signed-off-by: Zhang Yi yi.zhang@huawei.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/iomap/buffered-io.c | 8 ++++---- fs/xfs/xfs_iops.c | 5 +++-- include/linux/iomap.h | 4 ++-- 3 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 569296ad9215..d1544ffbe230 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -17,6 +17,7 @@ #include <linux/bio.h> #include <linux/sched/signal.h> #include <linux/migrate.h> +#include <linux/math64.h> #include "trace.h"
#include "../internal.h" @@ -1044,11 +1045,10 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero, EXPORT_SYMBOL_GPL(iomap_zero_range);
int -iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero, - const struct iomap_ops *ops) +iomap_truncate_page(struct inode *inode, loff_t pos, unsigned int blocksize, + bool *did_zero, const struct iomap_ops *ops) { - unsigned int blocksize = i_blocksize(inode); - unsigned int off = pos & (blocksize - 1); + unsigned int off = rem_u64(pos, blocksize);
/* Block boundary? Nothing to do */ if (!off) diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c index 8446b145130c..d546818feab7 100644 --- a/fs/xfs/xfs_iops.c +++ b/fs/xfs/xfs_iops.c @@ -809,6 +809,7 @@ xfs_setattr_size( int error; uint lock_flags = 0; bool did_zeroing = false; + unsigned int blocksize = i_blocksize(inode);
ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); ASSERT(xfs_isilocked(ip, XFS_MMAPLOCK_EXCL)); @@ -870,8 +871,8 @@ xfs_setattr_size( newsize); if (error) return error; - error = iomap_truncate_page(inode, newsize, &did_zeroing, - &xfs_buffered_write_iomap_ops); + error = iomap_truncate_page(inode, newsize, blocksize, + &did_zeroing, &xfs_buffered_write_iomap_ops); }
if (error) diff --git a/include/linux/iomap.h b/include/linux/iomap.h index d14a729d40ce..1b6e22741d43 100644 --- a/include/linux/iomap.h +++ b/include/linux/iomap.h @@ -207,8 +207,8 @@ int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len, const struct iomap_ops *ops); int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero, const struct iomap_ops *ops); -int iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero, - const struct iomap_ops *ops); +int iomap_truncate_page(struct inode *inode, loff_t pos, unsigned int blocksize, + bool *did_zero, const struct iomap_ops *ops); vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops); int iomap_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
From: Zhang Yi yi.zhang@huawei.com
maillist inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3 CVE: NA
Reference: https://lore.kernel.org/linux-fsdevel/20240529095206.2568162-1-yi.zhang@huaw...
--------------------------------
When truncating down an inode, we call xfs_truncate_page() to zero out the tail partial block beyond the new EOF, which prevents exposing stale data. But xfs_truncate_page() always assumes the blocksize is i_blocksize(inode); this is not true if the file has a larger allocation unit that we should align to, e.g. a realtime inode should be aligned to the rtextsize.

The current xfs_setattr_size() can't zero out a large allocation unit on truncate down because the order of operations is wrong. We first zero out through xfs_truncate_page(), and then immediately update the inode size through truncate_setsize(). If the zeroed range is larger than a folio, the writeback path will not write back zeroed pagecache beyond the EOF folio, so it doesn't write zeroes to the entire tail extent and could expose stale data after an appending write into the next aligned extent.

Adjust the order to: zero out the tail aligned blocks, write back the zeroed or cached data, update i_size, and drop the page cache beyond the aligned EOF block. This prepares for the realtime inode fix and for the upcoming forced alignment feature.
Signed-off-by: Zhang Yi yi.zhang@huawei.com Signed-off-by: Long Li leo.lilong@huawei.com --- fs/xfs/xfs_iops.c | 115 ++++++++++++++++++++++++---------------------- 1 file changed, 59 insertions(+), 56 deletions(-)
diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c index d546818feab7..673f066d3ad4 100644 --- a/fs/xfs/xfs_iops.c +++ b/fs/xfs/xfs_iops.c @@ -809,7 +809,7 @@ xfs_setattr_size( int error; uint lock_flags = 0; bool did_zeroing = false; - unsigned int blocksize = i_blocksize(inode); + bool write_back = false;
ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL)); ASSERT(xfs_isilocked(ip, XFS_MMAPLOCK_EXCL)); @@ -846,21 +846,10 @@ xfs_setattr_size( */ inode_dio_wait(inode);
-	/*
-	 * File data changes must be complete before we start the transaction to
-	 * modify the inode. This needs to be done before joining the inode to
-	 * the transaction because the inode cannot be unlocked once it is a
-	 * part of the transaction.
-	 *
-	 * Start with zeroing any data beyond EOF that we may expose on file
-	 * extension, or zeroing out the rest of the block on a downward
-	 * truncate.
-	 */
-	if (newsize > oldsize) {
-		trace_xfs_zero_eof(ip, oldsize, newsize - oldsize);
-		error = iomap_zero_range(inode, oldsize, newsize - oldsize,
-				&did_zeroing, &xfs_buffered_write_iomap_ops);
-	} else {
+	write_back = newsize > ip->i_d.di_size && oldsize != ip->i_d.di_size;
+	if (newsize < oldsize) {
+		unsigned int blocksize = i_blocksize(inode);
+
 		/*
 		 * iomap won't detect a dirty page over an unwritten block (or a
 		 * cow block over a hole) and subsequently skips zeroing the
@@ -868,53 +857,67 @@ xfs_setattr_size(
 		 * convert the block before the pagecache truncate.
 		 */
 		error = filemap_write_and_wait_range(inode->i_mapping, newsize,
-				newsize);
+				roundup_64(newsize, blocksize) - 1);
 		if (error)
 			return error;
 		error = iomap_truncate_page(inode, newsize, blocksize,
 				&did_zeroing, &xfs_buffered_write_iomap_ops);
-	}
-	if (error)
-		return error;
+		/*
+		 * We are going to log the inode size change in this transaction
+		 * so any previous writes that are beyond the on disk EOF and
+		 * the new EOF that have not been written out need to be written
+		 * here. If we do not write the data out, we expose ourselves
+		 * to the null files problem. Note that this includes any block
+		 * zeroing we did above; otherwise those blocks may not be
+		 * zeroed after a crash.
+		 */
+		if (did_zeroing || write_back) {
+			error = filemap_write_and_wait_range(inode->i_mapping,
+					min_t(loff_t, ip->i_d.di_size, newsize),
+					roundup_64(newsize, blocksize) - 1);
+			if (error)
+				return error;
+		}
-	/*
-	 * We've already locked out new page faults, so now we can safely remove
-	 * pages from the page cache knowing they won't get refaulted until we
-	 * drop the XFS_MMAP_EXCL lock after the extent manipulations are
-	 * complete. The truncate_setsize() call also cleans partial EOF page
-	 * PTEs on extending truncates and hence ensures sub-page block size
-	 * filesystems are correctly handled, too.
-	 *
-	 * We have to do all the page cache truncate work outside the
-	 * transaction context as the "lock" order is page lock->log space
-	 * reservation as defined by extent allocation in the writeback path.
-	 * Hence a truncate can fail with ENOMEM from xfs_trans_alloc(), but
-	 * having already truncated the in-memory version of the file (i.e. made
-	 * user visible changes). There's not much we can do about this, except
-	 * to hope that the caller sees ENOMEM and retries the truncate
-	 * operation.
-	 *
-	 * And we update in-core i_size and truncate page cache beyond newsize
-	 * before writeback the [di_size, newsize] range, so we're guaranteed
-	 * not to write stale data past the new EOF on truncate down.
-	 */
-	truncate_setsize(inode, newsize);
+		/*
+		 * Updating i_size after writing back to make sure the zeroed
+		 * blocks could been written out, and drop all the page cache
+		 * range that beyond blocksize aligned new EOF block.
+		 *
+		 * We've already locked out new page faults, so now we can
+		 * safely remove pages from the page cache knowing they won't
+		 * get refaulted until we drop the XFS_MMAP_EXCL lock after the
+		 * extent manipulations are complete.
+		 */
+		i_size_write(inode, newsize);
+		truncate_pagecache(inode, roundup_64(newsize, blocksize));
+	} else {
+		/*
+		 * Start with zeroing any data beyond EOF that we may expose on
+		 * file extension.
+		 */
+		if (newsize > oldsize) {
+			trace_xfs_zero_eof(ip, oldsize, newsize - oldsize);
+			error = iomap_zero_range(inode, oldsize, newsize - oldsize,
+					&did_zeroing, &xfs_buffered_write_iomap_ops);
+			if (error)
+				return error;
+		}
-	/*
-	 * We are going to log the inode size change in this transaction so
-	 * any previous writes that are beyond the on disk EOF and the new
-	 * EOF that have not been written out need to be written here. If we
-	 * do not write the data out, we expose ourselves to the null files
-	 * problem. Note that this includes any block zeroing we did above;
-	 * otherwise those blocks may not be zeroed after a crash.
-	 */
-	if (did_zeroing ||
-	    (newsize > ip->i_d.di_size && oldsize != ip->i_d.di_size)) {
-		error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping,
-						ip->i_d.di_size, newsize - 1);
-		if (error)
-			return error;
+		/*
+		 * The truncate_setsize() call also cleans partial EOF page
+		 * PTEs on extending truncates and hence ensures sub-page block
+		 * size filesystems are correctly handled, too.
+		 */
+		truncate_setsize(inode, newsize);
+
+		if (did_zeroing || write_back) {
+			error = filemap_write_and_wait_range(inode->i_mapping,
+					ip->i_d.di_size, newsize - 1);
+			if (error)
+				return error;
+		}
 	}

 	error = xfs_trans_alloc(mp, &M_RES(mp)->tr_itruncate, 0, 0, 0, &tp);
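A note on the range arithmetic above: roundup_64(newsize, blocksize) - 1 is the last byte of the block that contains the new EOF, so the flush now covers the whole tail block instead of just the single byte at newsize. The sketch below shows the same arithmetic in plain userspace C; it is a minimal illustration with made-up sizes, and roundup_64() here is a local stand-in for the XFS helper of the same name, not the kernel code itself:

#include <stdio.h>
#include <stdint.h>

/* local stand-in for the XFS roundup_64() helper */
static uint64_t roundup_64(uint64_t x, uint32_t y)
{
	return ((x + y - 1) / y) * y;	/* round x up to a multiple of y */
}

int main(void)
{
	uint64_t newsize = 6000;	/* made-up new file size */
	uint32_t blocksize = 4096;	/* made-up block size */

	/*
	 * The old code flushed only the byte range [newsize, newsize];
	 * the new code flushes through the last byte of the EOF block.
	 */
	printf("flush range: [%llu, %llu]\n",
	       (unsigned long long)newsize,
	       (unsigned long long)(roundup_64(newsize, blocksize) - 1));
	/* prints: flush range: [6000, 8191] */
	return 0;
}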
From: Zhang Yi <yi.zhang@huawei.com>
maillist inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I9VTE3
CVE: NA
Reference: https://lore.kernel.org/linux-fsdevel/20240529095206.2568162-1-yi.zhang@huaw...
--------------------------------
When truncating down a realtime file whose sb_rextsize is bigger than one block to an unaligned size, xfs_truncate_page() only zeroes out the tail of the EOF block. This can expose stale data since commit 943bc0882ceb ("iomap: don't increase i_size if it's not a write operation").

If we truncate a file that contains a large enough written extent:

   |<    rtext    >|<    rtext    >|
...WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW
      ^ (new EOF)                   ^ old EOF
Since we only zero out the tail of the EOF block while xfs_itruncate_extents() unmaps whole aligned extents, we end up in this state:
   |<    rtext    >|
...WWWzWWWWWWWWWWWWW
      ^ new EOF
Then, if we do an extending write like this, the blocks in the previous tail extent become stale:
   |<    rtext    >|
...WWWzSSSSSSSSSSSSS..........WWWWWWWWWWWWWWWWW
      ^ old EOF               ^ append start  ^ new EOF
Fix this by zeroing out the tail allocation unit, and also make sure xfs_itruncate_extents() unmaps rtextsize-aligned extents.
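To make the failure mode concrete, a reproducer along the following lines can trigger it. This is a hypothetical sketch only: the mount point and file name are made up, it assumes /mnt/rt is an XFS mount on which new files inherit the realtime flag with a 16KiB rextsize on a 4KiB-block filesystem, and error checking is trimmed for brevity:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096], zeroes[4096] = { 0 };
	/* hypothetical realtime file; rextsize assumed to be 16KiB */
	int fd = open("/mnt/rt/testfile", O_RDWR | O_CREAT, 0644);

	if (fd < 0)
		return 1;

	/* write 32KiB of 0xaa so the tail rtext is fully mapped */
	memset(buf, 0xaa, sizeof(buf));
	for (int i = 0; i < 8; i++)
		pwrite(fd, buf, sizeof(buf), (off_t)i * sizeof(buf));
	fsync(fd);

	ftruncate(fd, 6000);			/* unaligned truncate down */
	pwrite(fd, buf, sizeof(buf), 40960);	/* extend past the old EOF */
	fsync(fd);

	/*
	 * Blocks between the zeroed EOF block and the end of the old tail
	 * rtext (here 8192..16383) must now read back as zeroes; 0xaa
	 * bytes here are the stale data this patch prevents.
	 */
	pread(fd, buf, sizeof(buf), 8192);
	printf("stale data exposed: %s\n",
	       memcmp(buf, zeroes, sizeof(buf)) ? "yes" : "no");
	close(fd);
	return 0;
}

On a fixed kernel the read-back range is all zeroes; on an affected kernel it still contains the 0xaa bytes written before the truncate.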
Fixes: 943bc0882ceb ("iomap: don't increase i_size if it's not a write operation")
Reported-by: Chandan Babu R <chandanbabu@kernel.org>
Link: https://lore.kernel.org/linux-xfs/0b92a215-9d9b-3788-4504-a520778953c2@huawe...
Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
Signed-off-by: Long Li <leo.lilong@huawei.com>
---
 fs/xfs/xfs_inode.c | 13 +++++++++++++
 fs/xfs/xfs_inode.h |  1 +
 fs/xfs/xfs_iops.c  |  2 +-
 3 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index 5fafc8f419cc..b7c1d3731747 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -3985,3 +3985,16 @@ xfs_iunlock2_io_mmap(
 	if (!same_inode)
 		inode_unlock(VFS_I(ip1));
 }
+
+/* Returns the size of fundamental allocation unit for a file, in bytes. */
+unsigned int
+xfs_inode_alloc_unitsize(
+	struct xfs_inode	*ip)
+{
+	unsigned int		blocks = 1;
+
+	if (XFS_IS_REALTIME_INODE(ip))
+		blocks = ip->i_mount->m_sb.sb_rextsize;
+
+	return XFS_FSB_TO_B(ip->i_mount, blocks);
+}
diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
index b5b97be319e6..818f7622d851 100644
--- a/fs/xfs/xfs_inode.h
+++ b/fs/xfs/xfs_inode.h
@@ -571,5 +571,6 @@ void xfs_end_io(struct work_struct *work);
 
 int xfs_ilock2_io_mmap(struct xfs_inode *ip1, struct xfs_inode *ip2);
 void xfs_iunlock2_io_mmap(struct xfs_inode *ip1, struct xfs_inode *ip2);
+unsigned int xfs_inode_alloc_unitsize(struct xfs_inode *ip);
 
 #endif	/* __XFS_INODE_H__ */
diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
index 673f066d3ad4..97569fb5f196 100644
--- a/fs/xfs/xfs_iops.c
+++ b/fs/xfs/xfs_iops.c
@@ -848,7 +848,7 @@ xfs_setattr_size(
 
 	write_back = newsize > ip->i_d.di_size && oldsize != ip->i_d.di_size;
 	if (newsize < oldsize) {
-		unsigned int blocksize = i_blocksize(inode);
+		unsigned int blocksize = xfs_inode_alloc_unitsize(ip);
 
 		/*
 		 * iomap won't detect a dirty page over an unwritten block (or a