Package Details: ceph 18.2.4-4

Git Clone URL: https://aur.archlinux.org/ceph.git (read-only)
Package Base: ceph
Description: Ceph Storage full install [VIRTUAL]
Upstream URL: https://ceph.com/
Licenses: GPL-2.0-or-later OR LGPL-2.1-or-later OR LGPL-3.0-or-later
Submitter: foxxx0
Maintainer: pbazaah
Last Packager: pbazaah
Votes: 7
Popularity: 0.46
First Submitted: 2022-08-08 09:09 (UTC)
Last Updated: 2024-12-01 16:03 (UTC)

Sources (29)

Pinned Comments

pbazaah commented on 2022-10-05 13:03 (UTC) (edited on 2022-10-05 13:03 (UTC) by pbazaah)

For future commenters:

TLDR:

https://aur.archlinux.org/pkgbase/ceph | From source build (slow)

https://aur.archlinux.org/pkgbase/ceph-bin | Pre-built binaries (fast)


Unlike the original community version, this repo builds ceph from source. Ceph is a large, complicated project, so this takes several hours even on a good build server.

To get a similar experience to how community/ceph worked (pre-built binaries), use ceph-bin instead.

Latest Comments


phaseburn commented on 2022-09-20 16:53 (UTC)

I'd love to see version-specific packages like ceph-octopus and ceph-pacific as @petronny suggested.

I'm also hopeful that there will be a libvirt-storage-rbd AUR package for using ceph from libvirt, since that official package also disappeared due to the policy against official packages depending on AUR packages.

petronny commented on 2022-09-20 12:37 (UTC)

Also, the build fails with the current PKGBUILD:

In file included from /usr/include/boost/asio/io_context.hpp:22,
                 from /build/ceph/src/ceph-15.2.14/src/common/async/yield_context.h:19,
                 from /build/ceph/src/ceph-15.2.14/src/rgw/rgw_dmclock_scheduler.h:21,
                 from /build/ceph/src/ceph-15.2.14/src/rgw/rgw_dmclock_sync_scheduler.h:18,
                 from /build/ceph/src/ceph-15.2.14/src/test/rgw/test_rgw_dmclock_scheduler.cc:17:
/usr/include/boost/asio/async_result.hpp: In instantiation of ‘struct boost::asio::async_completion<boost::asio::basic_yield_context<boost::asio::executor>, void(boost::system::error_code, crimson::dmclock::PhaseType)>’:
/build/ceph/src/ceph-15.2.14/src/rgw/rgw_dmclock_async_scheduler.h:132:61:   required from ‘auto rgw::dmclock::AsyncScheduler::async_request(const rgw::dmclock::client_id&, const crimson::dmclock::ReqParams&, const crimson::dmclock::Time&, crimson::dmclock::Cost, CompletionToken&&) [with CompletionToken = boost::asio::basic_yield_context<boost::asio::executor>; crimson::dmclock::Time = double; crimson::dmclock::Cost = unsigned int]’
/build/ceph/src/ceph-15.2.14/src/test/rgw/test_rgw_dmclock_scheduler.cc:418:34:   required from here
/usr/include/boost/asio/async_result.hpp:651:9: error: no type named ‘completion_handler_type’ in ‘class boost::asio::async_result<boost::asio::basic_yield_context<boost::asio::executor>, void(boost::system::error_code, crimson::dmclock::PhaseType)>’
  651 |         completion_handler_type;
      |         ^~~~~~~~~~~~~~~~~~~~~~~
/usr/include/boost/asio/async_result.hpp:684:62: error: no type named ‘completion_handler_type’ in ‘class boost::asio::async_result<boost::asio::basic_yield_context<boost::asio::executor>, void(boost::system::error_code, crimson::dmclock::PhaseType)>’
  684 |     completion_handler_type&, completion_handler_type>::type completion_handler;
      |                                                              ^~~~~~~~~~~~~~~~~~

Full build log can be downloaded from https://github.com/arch4edu/cactus/actions/runs/3089392900

petronny commented on 2022-09-20 10:48 (UTC)

Hi, thank you for maintaining this package.

I have a suggestion: create packages like ceph-octopus to provide ceph=15, ceph-pacific to provide ceph=16, and so on, with plain ceph always providing the latest release.

Then users can install the same version that their servers run.
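
A minimal sketch of what that split could look like in a PKGBUILD (hypothetical package, using the pacific/16.x pairing from above together with the 16.2.7 version mentioned elsewhere in this thread; not an actual published AUR package):

    # Hypothetical PKGBUILD excerpt for a versioned ceph-pacific package.
    # Name and version are illustrative only.
    pkgname=ceph-pacific
    pkgver=16.2.7
    pkgrel=1

    # Satisfy dependencies on the ceph 16.x series, while conflicting with
    # the other ceph variants so only one is installed at a time.
    provides=("ceph=${pkgver}")
    conflicts=('ceph')

Users (or dependent packages) could then pin the series matching their cluster, while plain ceph continues to track the latest release.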

pbazaah commented on 2022-08-14 19:29 (UTC) (edited on 2022-08-14 19:30 (UTC) by pbazaah)

So here's a summary of the work I've done so far to handle the move of ceph out of community.

  1. Updated the PKGBUILD here to build a functional ceph package for 15.2.14
  2. Created the ceph-bin{0} package which uses the artifacts produced by this build

Here's what still needs to be done:

  1. Push the latest changes to aur.archlinux.org -- blocked until community/ceph is removed
  2. Update the package to use a non-EOL ceph version -- in progress; I have a 16.2.7 build that's mostly functional (the work can be found here {1}), and I still need to investigate the 17.2.x PKGBUILD someone linked me.

Unlike official packages, which tend to be binary only, the AUR prefers the 'build-it-yourself' approach. Unfortunately, for something as big as ceph, that typically means spending 40-50 minutes compiling (assuming 16+ cores) per upgrade, which simply isn't feasible.

So, to keep the ease of prebuilt packages, I've created a sister package, 'ceph-bin', which consumes the binary products of this package.
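
As a rough sketch of how that consumption can work (hypothetical excerpt with a placeholder URL; not the actual ceph-bin PKGBUILD):

    # Hypothetical ceph-bin PKGBUILD excerpt; the source URL is a placeholder.
    pkgname=ceph-bin
    pkgver=15.2.14
    pkgrel=7

    # Stand in for the from-source ceph package; only one may be installed.
    provides=("ceph=${pkgver}")
    conflicts=('ceph')

    # Fetch a prebuilt artifact published from the from-source ceph build.
    source=("https://example.com/ceph-${pkgver}-${pkgrel}-x86_64.pkg.tar.zst")
    sha256sums=('SKIP')

    package() {
      # makepkg has already extracted the tarball into $srcdir; copy the
      # payload into the new package root.
      cp -a "${srcdir}/usr" "${pkgdir}/"
    }

This keeps the heavy compile on the maintainer's side; installing the -bin package is just a download and repackage.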

So TLDR:

  • packages/ceph: from source build
  • packages/ceph-bin: from prebuilt binaries

I'll leave the decision of which to use up to you, but if you want a similar experience to the previous official packages, pick ceph-bin.

{0} https://aur.archlinux.org/packages/ceph-bin

{1} https://github.com/bazaah/aur-ceph/tree/feature/16-2-7_1

pbazaah commented on 2022-08-09 15:52 (UTC)

Update: I have successfully built a 15.2.14_7 that is functional, but I am blocked because community/ceph still exists, and the AUR git server therefore refuses the fast-forward.

This is probably okay, as I think I'll need to add a ceph-xxx-bin series of packages that use the artifacts produced by this package's build. This keeps in line with AUR guidelines around package naming, and also keeps the previous utility of being able to just install the built binaries (without paying for a 1:30h build on a 12-core machine).

pbazaah commented on 2022-08-08 09:46 (UTC)

So some history for the future:

See this bug report: https://bugs.archlinux.org/task/73335 for how this package came to be.


Currently (as of 2022-08-08), this package is broken and should not be used.

I'm hopeful that I'll be able to get something working this weekend -- likely just a rebuild of 15.2.14 that works.

I'll also make my 16.2.x work available somewhere, so that others see the progress.