Just another FYI
https://tracker.ceph.com/issues/74156
Non-LTS kernels will leak folios when using the in-kernel cephfs driver.
So, try to keep your cephfs clients on a pre-6.15 kernel.
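If it helps, here is a quick way to check whether a client is running an affected kernel. This is a minimal sketch under the assumption that 6.15 and newer are affected; `check_kernel` is just a hypothetical helper name, not anything shipped by ceph or the kernel.

```shell
# Hypothetical helper: prints "affected" for kernels 6.15 or newer, "ok" otherwise.
check_kernel() {
  v="${1%%-*}"   # strip packaging suffix, e.g. 6.16.1-arch1-1 -> 6.16.1
  # sort -V sorts version strings; if 6.15 sorts first (or ties), we are on 6.15+
  lowest="$(printf '%s\n' "$v" 6.15 | sort -V | head -n1)"
  if [ "$lowest" = "6.15" ]; then
    echo "affected"
  else
    echo "ok"
  fi
}

check_kernel "$(uname -r)"
```

LTS kernels such as 6.12.x sort below 6.15 and report "ok" here.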
| Git Clone URL: | https://aur.archlinux.org/ceph.git (read-only) |
|---|---|
| Package Base: | ceph |
| Description: | Ceph Storage client library for CephFS, a distributed POSIX filesystem |
| Upstream URL: | https://ceph.com/ |
| Licenses: | GPL-2.0-or-later, LGPL-2.1-or-later, LGPL-3.0-or-later |
| Provides: | libcephfs.so |
| Submitter: | foxxx0 |
| Maintainer: | pbazaah |
| Last Packager: | pbazaah |
| Votes: | 8 |
| Popularity: | 0.012195 |
| First Submitted: | 2022-08-08 09:09 (UTC) |
| Last Updated: | 2025-10-24 09:41 (UTC) |
Just a heads up to Ceph users, there is currently a data corruption bug in EC pools if you have set allow_ec_overwrites to true on the pool.
Thanks to https://github.com/insanemal for bringing this to my attention.
You can follow along with the upstream bug tracker here: https://tracker.ceph.com/issues/70390
or this package's issue here: https://github.com/bazaah/aur-ceph/issues/34
To address any new corruption, ensure that either:

- `ceph config get osd bluestore_elastic_shared_blobs` reports false (the default is true), or
- `allow_ec_overwrites` is not set to true on the pool (the default is false)

Changing either of these will not repair existing corruption, and so far the only known fix is to redeploy the OSD after making one of the above changes.
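For reference, checking and changing these settings looks roughly like this. This is a sketch, not a full remediation procedure: `<pool>` is a placeholder for your EC pool's name, and neither command repairs data that is already corrupted.

```shell
# Check the OSD option implicated in the bug (default is true):
ceph config get osd bluestore_elastic_shared_blobs

# Check whether overwrites are enabled on a given EC pool (default is false):
ceph osd pool get <pool> allow_ec_overwrites

# Mitigation for new writes; affected OSDs still need to be
# redeployed afterwards for this to take effect:
ceph config set osd bluestore_elastic_shared_blobs false
```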
Thanks for the kind words :)
I was expecting v20 to already be released, but something seems to be holding it up, and we're getting pretty close to the holiday season, so it may only happen next year, now.
@pbazaah Thanks again for all your work. I've got the ISA erasure coding plugin working.
It's fantastic. Thanks for your ongoing work. I look forward to the 20.x releases, which apparently bring huge speed-ups to erasure-coded pools.
Looks like a change from boost 1.89: https://github.com/boostorg/system/issues/132#issuecomment-3146378680
I'll get a pkgrel out soonish to fix this
Hi, with an Arch system up to date as of writing this comment, I get this error when building:

```
CMake Error at /usr/lib/cmake/Boost-1.89.0/BoostConfig.cmake:141 (find_package):
  Could not find a package configuration file provided by "boost_system"
  (requested version 1.89.0) with any of the following names:

    boost_systemConfig.cmake
    boost_system-config.cmake

  Add the installation prefix of "boost_system" to CMAKE_PREFIX_PATH or set
  "boost_system_DIR" to a directory containing one of the above files.  If
  "boost_system" provides a separate development package or SDK, be sure it
  has been installed.
Call Stack (most recent call first):
  /usr/lib/cmake/Boost-1.89.0/BoostConfig.cmake:262 (boost_find_component)
  cmake/modules/FindBoost.cmake:598 (find_package)
  CMakeLists.txt:709 (find_package)

-- Configuring incomplete, errors occurred!
==> ERROR: A failure occurred in build(). Aborting...
```
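For context, the linked boostorg/system issue is about Boost 1.89 making Boost.System header-only and no longer installing a `boost_system` CMake package config, so any `find_package` that requests the `system` component now fails. The general shape of the downstream fix is sketched below; this is a generic illustration, not ceph's actual build files, and `myapp` plus the component list are made up for the example.

```cmake
# Fails against Boost >= 1.89, which no longer ships boost_system's package config:
# find_package(Boost REQUIRED COMPONENTS system thread)

# Boost.System is header-only now, so request only components that still build
# as libraries; the headers come along with the main Boost package:
find_package(Boost REQUIRED COMPONENTS thread)
target_link_libraries(myapp PRIVATE Boost::thread)
```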
As a heads up, the release of Ceph v20 is nigh, but I normally hold off on packaging it for a while to give time for any early bug fixes to make their way into the release, so I don't expect to release v20 here until next month at the earliest.
I will be releasing a -2 of 19.2.3 which finally has a fix for the mgr dashboard, backported from the mainline upstream. Yay.
Couple things to note however:
- You may need to `ceph mgr module disable restful`
- `python-routes` (and probably other python packages) is needed for the dashboard to work. I'll be adding this as an optional dep to the ceph-mgr package, and I'll probably look at splitting up the mgr modules a bit more so I can specify the exact deps each one needs, but that's not happening for now.
- The mgr modules `cephadm` and `diskprediction_local` also remain broken for the moment, but cephadm should be fixed when v20 releases, and diskprediction_local looks deprecated by the upstream anyway.
v19.2.3-1 has been released.
There are no notable changes beyond a few feature backports from the main branch, and a full fix for the data loss bug in RGW -- v19.2.2 fixed it in practice, but it was still possible under some uncommon circumstances.
This also rebuilds ceph for glibc 2.42
They have a PR for the Boost stuff:
I got my first successful build just now. I'm just sidestepping the boost::process stuff for now, I don't trust myself to make the kind of changes needed for v1->v2. The upstream should notice it eventually and my workaround will get much easier to maintain once https://github.com/boostorg/process/issues/480#issuecomment-2797215531 (hopefully) lands.
Pinned Comments
pbazaah commented on 2022-10-05 13:03 (UTC) (edited on 2022-10-05 13:03 (UTC) by pbazaah)
For future commenters:
TLDR:
https://aur.archlinux.org/pkgbase/ceph | From source build (slow)
https://aur.archlinux.org/pkgbase/ceph-bin | Pre-built binaries (fast)
Unlike the original community version, this repo builds ceph from source. Ceph is a large, complicated project so this takes several hours on a good build server.
To get a similar experience to how community/ceph worked (pre-built binaries) use ceph-bin instead.