Notice
This document is for a development version of Ceph.
cephfs-tool
cephfs-tool is a standalone C++ utility designed to interact directly with
libcephfs. The initial implementation focuses on a bench command to
measure library performance. This allows developers and administrators to
benchmark the userspace library in isolation from FUSE or kernel client overhead.
Key features include:
Multi-threaded read/write throughput benchmarking.
Configurable block sizes, file counts, and fsync intervals.
Detailed statistical reporting (Mean, Std Dev, Min/Max) for throughput and IOPS.
Support for specific CephFS user/group impersonation (UID/GID) via
ceph_mount_perms_set.
Building
The tool can be built outside of the Ceph source tree:
g++ -std=c++20 -D_FILE_OFFSET_BITS=64 -O3 -o cephfs-tool cephfs-tool.cc -lcephfs -lpthread -lboost_program_options
Usage
cephfs-tool [general-options] <command> [command-options]
Commands
- bench
Run IO benchmark
Options
General Options
- -h, --help
Produce help message
- -c, --conf <path>
Ceph config file path
- -i, --id <id>
Client ID (default: admin)
- -k, --keyring <path>
Path to keyring file
- --filesystem <name>
CephFS filesystem name to mount
- --uid <uid>
User ID to mount as (default: -1)
- --gid <gid>
Group ID to mount as (default: -1)
Benchmark Options
These options are used with the bench command.
- --threads <n>
Number of threads (default: 1)
- --iterations <n>
Number of iterations (default: 1)
- --files <n>
Total number of files (default: 100)
- --size <size>
File size (e.g. 4MB; 0 for creates only) (default: 4MB)
- --block-size <size>
IO block size (e.g. 1MB) (default: 4MB)
- --fsync-every <size>
Call fsync every N bytes (default: 0)
- --prefix <str>
Filename prefix (default: benchmark_)
- --dir-prefix <str>
Directory prefix (default: bench_run_)
- --root-path <path>
Root path in CephFS (default: /)
- --per-thread-mount
Use a separate mount per thread
- --no-cleanup
Disable cleanup of files
Examples
Benchmark throughput with 8 threads:
env CEPH_ARGS="--log-to-stderr=false --log-to-file=false --log-file=/tmp/bench.log" \
./cephfs-tool -c ~/ceph.conf -k ~/keyring -i scratch --filesystem scratch \
bench --root-path=/pdonnell --files 256 --size=$(( 128 * 2 ** 20 )) \
--threads=8 --iterations 3
Output:
Benchmark Configuration:
Threads: 8 | Iterations: 3
Files: 256 | Size: 134217728
Filesystem: scratch
Root: /pdonnell
Subdirectory: bench_run_d942
UID: -1
GID: -1
--- Iteration 1 of 3 ---
Starting Write Phase...
Write: 2761.97 MB/s, 21.5779 files/s (11.864s)
Starting Read Phase...
Read: 2684.36 MB/s, 20.9716 files/s (12.207s)
...
*** Final Report ***
Write Throughput Statistics (3 runs):
Mean: 2727.06 MB/s
Std Dev: 26.2954 MB/s
Min: 2698.51 MB/s
Max: 2761.97 MB/s
Read Throughput Statistics (3 runs):
Mean: 2687.24 MB/s
Std Dev: 5.68904 MB/s
Min: 2682.16 MB/s
Max: 2695.18 MB/s
File Creates Statistics (3 runs):
Mean: 21.3051 files/s
Std Dev: 0.205433 files/s
Min: 21.0821 files/s
Max: 21.5779 files/s
File Reads (Opens) Statistics (3 runs):
Mean: 20.994 files/s
Std Dev: 0.0444456 files/s
Min: 20.9544 files/s
Max: 21.0561 files/s
Cleaning up...
Benchmark file creation performance (size 0):
env CEPH_ARGS="--log-to-stderr=false --log-to-file=false --log-file=/tmp/bench.log" \
./cephfs-tool -c ~/ceph.conf -k ~/keyring -i scratch --filesystem scratch \
bench --root-path=/pdonnell --files=$(( 2 ** 16 )) --size=0 \
--threads=8 --iterations 3
Output:
Benchmark Configuration:
Threads: 8 | Iterations: 3
Files: 65536 | Size: 0
...
*** Final Report ***
File Creates Statistics (3 runs):
Mean: 4001.86 files/s
Std Dev: 125.337 files/s
Min: 3863.7 files/s
Max: 4167.1 files/s
File Reads (Opens) Statistics (3 runs):
Mean: 14382.3 files/s
Std Dev: 556.594 files/s
Min: 13636.3 files/s
Max: 14972.8 files/s
Cleaning up...
Brought to you by the Ceph Foundation
The Ceph Documentation is a community resource funded and hosted by the non-profit Ceph Foundation. If you would like to support this and our other efforts, please consider joining now.