Package Details: anki 24.04.1-3

Git Clone URL: https://aur.archlinux.org/anki.git (read-only)
Package Base: anki
Description: Helps you remember facts (like words/phrases in a foreign language) efficiently
Upstream URL: https://apps.ankiweb.net/
Keywords: anki languages learning vocabulary
Licenses: AGPL3
Conflicts: anki-bin, anki-git, anki-qt5
Submitter: demize
Maintainer: AlexBocken
Last Packager: AlexBocken
Votes: 160
Popularity: 8.10
First Submitted: 2021-09-17 22:31 (UTC)
Last Updated: 2024-05-25 10:44 (UTC)

Latest Comments

Spixmaster commented on 2024-03-31 10:02 (UTC) (edited on 2024-03-31 10:02 (UTC) by Spixmaster)

I can confirm what @patenteng described. I tried to compile it again with version 23.12.1-3, but the rustc process was killed again at around 6 GiB. This time I did not use the TTY.

sohailhmmyes commented on 2024-03-26 14:29 (UTC)

Works fine now.

patenteng commented on 2024-03-25 14:52 (UTC)

@AlexBocken I recompiled, but I don't think it made much of a difference. It peaked at 8.3 GiB, plus 2 GiB for the rest of the OS with KDE, with all other applications closed. In my comment below I meant it used 10+ GiB in total.

AlexBocken commented on 2024-03-25 09:33 (UTC) (edited on 2024-03-25 09:34 (UTC) by AlexBocken)

@Spixmaster thanks for the detailed triage! The problem is most likely the new method for LTO, which uses fat LTO objects and requires more RAM. In personal testing with clang, gcc, and mold (the old way), I could not find a large difference in RAM usage; all peaked at around 2.5 GiB. So what you're experiencing might be specific to some machines.

Nonetheless, I've returned in -3 to the old way of doing LTO using mold, as that appears to be more reliable across machines.

I had switched in -2 to using fat LTO objects, since that is the recommended way to fix this issue (see the Arch GitLab link in the PKGBUILD comments). For people who prefer their linker of choice, this method is still available by manually uncommenting the corresponding lines. The default is now mold again, to allow for maximum compatibility.
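
For anyone who wants to map this to concrete build flags: the two variants roughly correspond to something like the lines below. This is only an illustrative sketch of common Rust LTO setups, not the literal PKGBUILD contents; the real lines (and the Arch GitLab reference) are in the PKGBUILD comments.

    # Variant A (mold, the -3 default): keep LTO but link with mold, roughly
    #   RUSTFLAGS+=" -C link-arg=-fuse-ld=mold"
    # Variant B (the -2 approach): fat LTO objects with the stock linker, roughly
    #   CFLAGS+=" -ffat-lto-objects"
    #   plus a fat LTO profile on the Rust side (e.g. lto = "fat" in the Cargo profile)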

Spixmaster commented on 2024-03-25 08:17 (UTC) (edited on 2024-03-25 08:18 (UTC) by Spixmaster)

@patenteng @AlexBocken I further investigated the issue. For that, I used the TTY to minimise RAM usage.

Since I was able to compile it last time, I considered what I had changed since then, which was that I had set up sccache. I therefore disabled it for this second attempt, but that did not change the outcome noticeably. My laptop has 7.64 GiB of RAM; the rustc process was killed at around 6.7 GiB of memory usage both times. According to @patenteng, the build takes more than 10 GiB. I do not know what has changed since I last compiled it successfully; maybe the switch from Qt5 to Qt6.
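
(For context on the sccache part, in case someone else wants to rule it out as well: sccache is typically hooked in as a compiler wrapper, so disabling it for one build just means removing that wrapper. A hedged example, assuming a typical setup:)

    # sccache is usually enabled via one of these:
    #   export RUSTC_WRAPPER=sccache
    #   or rustc-wrapper = "sccache" under [build] in ~/.cargo/config.toml
    # to build once without it, unset the variable / comment out that line:
    #   unset RUSTC_WRAPPER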

Nevertheless, I will go with anki-bin.

patenteng commented on 2024-03-25 06:46 (UTC)

RAM use when compiling is very high. It went to 10+ GB on my system.

Spixmaster commented on 2024-03-24 21:28 (UTC) (edited on 2024-03-24 21:28 (UTC) by Spixmaster)

@AlexBocken This time I specifically monitored the RAM usage. You seem to be right, and I hope that that is the reason. Thank you very much!

AlexBocken commented on 2024-03-24 20:08 (UTC) (edited on 2024-03-24 22:06 (UTC) by AlexBocken)

@Spixmaster Thanks for the report! The message sccache: Compile terminated by signal 9 indicates that this could be a RAM issue. Could you verify that you still have memory available when this signal is sent?

If not, consider reducing the number of parallel compile threads in your /etc/makepkg.conf (MAKEFLAGS=-j<number-of-threads>). Also try closing memory-hogging applications like a browser during compilation. Adding a swap partition could also solve this.
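
For example (a sketch only; the right numbers depend on your machine):

    # /etc/makepkg.conf -- cap parallel make jobs, e.g. at 4:
    MAKEFLAGS="-j4"
    # cargo has its own parallelism setting, which can be capped too:
    export CARGO_BUILD_JOBS=4
    # watch available memory while the build runs:
    free -h
    # and check afterwards whether the kernel OOM killer ended the rustc process:
    journalctl -k | grep -iE "out of memory|oom"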

Spixmaster commented on 2024-03-24 17:13 (UTC) (edited on 2024-03-24 17:17 (UTC) by Spixmaster)

The package does not build for me. I tried twice, the second time with a cleaned cache (rm -r .cache/paru/clone/anki/).

[63/65; 1 active; 657.560s] /home/matheus/.cache/paru/clone/anki/src/anki-23.12.1/out/rust/release/runner run cargo build --profile release-lto  --locked -p rsbridge --features rustls
FAILED: /home/matheus/.cache/paru/clone/anki/src/anki-23.12.1/out/rust/release-lto/librsbridge.so
/home/matheus/.cache/paru/clone/anki/src/anki-23.12.1/out/rust/release/runner run cargo build --profile release-lto  --locked -p rsbridge --features rustls
   Compiling syn v2.0.39
   Compiling libc v0.2.150
   Compiling autocfg v1.1.0
   Compiling serde v1.0.193
   Compiling version_check v0.9.4
   Compiling scopeguard v1.2.0
   Compiling once_cell v1.18.0
...
   Compiling rustls-webpki v0.101.7
   Compiling tokio-rustls v0.24.1
   Compiling hyper-rustls v0.24.2
   Compiling reqwest v0.11.22
warning: field `0` is never read
  --> rslib/src/config/bool.rs:57:79
   |
57 | struct BoolLike(#[serde(deserialize_with = "deserialize_bool_from_anything")] bool);
   |        -------- field in this struct                                          ^^^^
   |
   = note: `#[warn(dead_code)]` on by default
help: consider changing the field to be of unit type to suppress this warning while preserving the field numbering, or remove the field
   |
57 | struct BoolLike(#[serde(deserialize_with = "deserialize_bool_from_anything")] ());
   |                                                                               ~~

warning: `anki` (lib) generated 1 warning
sccache: Compile terminated by signal 9
error: could not compile `rsbridge` (lib)
Failed with code Some(101): cargo build --profile release-lto --locked -p rsbridge --features rustls
ninja: build stopped: subcommand failed.

Build failed.
==> ERROR: A failure occurred in build().
    Aborting...
Error: 'anki-23.12.1-2' could not be built:

AlexBocken commented on 2024-03-24 09:34 (UTC)

Thanks for the feedback. I've bumped the pkgrel to ensure this will be fine for others as well.

Finally figured out how to get rid of mold and still have LTO. This has also been changed. If anki no longer builds in your environment, please comment here or shoot me an email.