Package Details: zfs-linux-git 0.7.0_rc4_r24_g4358afa0f_4.11.2_1-1

Git Clone URL: https://aur.archlinux.org/zfs-linux-git.git (read-only)
Package Base: zfs-linux-git
Description: Kernel modules for the Zettabyte File System.
Upstream URL: http://zfsonlinux.org/
Licenses: CDDL
Groups: archzfs-linux-git
Conflicts: zfs-linux, zfs-linux-lts
Provides: zfs
Submitter: demizer
Maintainer: demizer
Last Packager: demizer
Votes: 14
Popularity: 2.679218
First Submitted: 2016-04-21 08:46
Last Updated: 2017-05-23 17:54

Pinned Comments

demizer commented on 2017-02-11 18:20

Hello everyone. Packages have not been updated in a while because I am working to switch them over to use "extramodules" so that they only need to be updated once for a major version. See https://github.com/archzfs/archzfs/issues/92

Latest Comments

wolfdogg commented on 2017-01-29 20:47

Following up on this: Zenju, the author of FreeFileSync, has corroborated that my suspicions were correct (http://www.freefilesync.org/forum/viewtopic.php?f=2&t=1650&p=13710#p13710). The flag is somehow set on the ZFS filesystem (or maybe one is missing), and these errors persist. This is NOT the Arch package in any way. I'm using 64-bit RHEL 7.3 to run the SAN, which uses zfsonlinux/zfs (https://github.com/zfsonlinux/zfs/) with the kABI-tracking kmod (so that I don't have to futz with improper kernel updates anymore).

So with that said, if anybody knows how to tweak the ZFS properties/settings so that SPARSE is turned off, I'm all ears. That's where my research is going next.
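For what it's worth, a first step would be inspecting the space-related properties on the dataset being shared. Note this is only a sketch: "tank/share" is a placeholder dataset name, and to my knowledge there is no dataset property literally called "sparse".

```shell
# Inspect space-allocation-related properties on the dataset exported to SMB.
# "tank/share" is a placeholder; substitute your own pool/dataset.
zfs get compression,recordsize,reservation,refreservation tank/share

# Sparseness is not a toggleable dataset property: sparse allocation is a
# per-zvol creation choice (zfs create -s -V <size> ...), and for SMB clients
# the relevant knob is usually on the Samba side ("strict allocate" in
# smb.conf), not in ZFS itself.
```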

I'm not sure what I might have done; this is a standard raidz2, and I haven't had luck testing any other pools yet. Is this a bug or not?

@demizer Any ideas?

bazzawill commented on 2017-01-29 04:04

I am having trouble upgrading my system with this package: I get a conflict upgrading linux against the old package, or a conflict upgrading this package against the older linux 4.8.13-1-ARCH. I can use pacman -Syudd, but that is ill-advised.

wolfdogg commented on 2016-10-27 07:32

Info: Cannot write file attributes of "\\server\pool\path\filex.ffs_tmp".
Error Code 1: Incorrect function. (DeviceIoControl, FSCTL_SET_SPARSE)

The error message above sheds some light: since I updated ZFS yesterday (or at least very recently), I'm seeing attribute errors when using FreeFileSync to push to the ZFS SAN. I feel as if sparse files, or some related feature, have been flagged as enabled on the ZFS array, either by me or automatically, where they possibly shouldn't be. So when FreeFileSync goes to write, the feature is flagged but not actually available, i.e. unsupported; OR there may be something missing in the drivers. Not sure.

ref:
zfs sparse files https://blogs.oracle.com/bonwick/entry/seek_hole_and_seek_data
FSCTL_SET_SPARSE https://msdn.microsoft.com/en-us/library/windows/desktop/aa364596(v=vs.85).aspx

Edit: topic made here https://bbs.archlinux.org/viewtopic.php?pid=1671430#p1671430
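One way to sanity-check sparse-file behavior at the POSIX level, independent of SMB, is the classic hole-punching test below. It is a generic sketch (the file lands wherever mktemp puts it; point it at the pool's mountpoint to test ZFS specifically):

```shell
# Create a file that is entirely a hole, then compare the apparent size
# against the blocks actually allocated on disk.
f=$(mktemp)
truncate -s 100M "$f"                      # 100 MiB apparent size, no data written
apparent=$(stat -c %s "$f")                # size in bytes per the inode
allocated=$(( $(stat -c %b "$f") * 512 ))  # 512-byte blocks actually allocated
echo "apparent=$apparent allocated=$allocated"
rm -f "$f"
```

If the filesystem supports sparse files, the allocated figure will be far smaller than the apparent one.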

spuggy commented on 2016-08-18 22:42

@larsko - you're a star! My ZFS volumes haven't mounted in months; there's even a bug in the git tracker for it. Thought I was going to have to import and manually start Gluster on reboots for evermore, until I tried your suggestion. Works a treat - Thanks!

@jerome2016, if you switched to zfs-dkms, that's a different maintainer, and those have their own pages.

I found the regular zfs packages too old for my taste - but keeping kernel/zfs-git modules in sync was a little too manual/awkward until I added the archzfs repo that demizer thoughtfully provides; that makes it a snap.

@demizer; you rock - thanks!!!

jerome2016 commented on 2016-08-04 00:50

Again, I cannot install/update ZFS.
Please... I would like to use ZFS again; is it possible to have it running and stable? I installed it before (some months ago), and at update time something failed, many times. I'm trying to use ZFS for a backup system, but it seems to be an unreliable choice because of unstable package maintenance.

this time the problem is:
https://gist.github.com/jerome-diver/051e32ee1a29c8464843eb303f496cb2

The solution in my case was to remove all of these packages (zfs-linux-git, zfs-utils-linux-git, and the related spl ones) and install the zfs-dkms packages.
That was the only way for me to get ZFS working again.

predmijat commented on 2016-06-10 10:35

I moved to zfs-linux (without -git), and solved this. More details at https://bbs.archlinux.org/viewtopic.php?pid=1633217#p1633217

predmijat commented on 2016-06-10 06:03

No, that didn't help.

In the meantime, update for 4.6.2 came along, but that didn't solve anything.

I exported the pool again, imported it (had to use -f; I don't know why, because I didn't use -f to export it), rebooted, and ZFS didn't start.

# systemctl status zfs.target
● zfs.target - ZFS startup target
   Loaded: loaded (/usr/lib/systemd/system/zfs.target; enabled; vendor preset: enabled)
   Active: active since Fri 2016-06-10 07:56:58 CEST; 24s ago

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/usr/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Fri 2016-06-10 07:56:58 CEST; 29s ago
  Process: 244 ExecStart=/usr/bin/zfs mount -a (code=exited, status=1/FAILURE)
 Main PID: 244 (code=exited, status=1/FAILURE)

Jun 10 07:56:59 hqr-workstation zfs[244]: The ZFS modules are not loaded.
Jun 10 07:56:59 hqr-workstation zfs[244]: Try running '/sbin/modprobe zfs' as root to load them.

At that point I have to:

1. Remove the /storage directory (the mountpoint for my pool; nothing is in the directory at this point, it was created by a service I tweaked to use the pool)
2. Run "systemctl start zfs-import-cache.service"
3. Run "zfs mount -a"

Until the next reboot...
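The per-boot workaround above can be sketched as a small script (unit names are the stock ZFS-on-Linux systemd units; /storage is the poster's mountpoint, so substitute your own):

```shell
#!/bin/sh
# Manual recovery after a boot where zfs-mount.service failed.
set -e
rmdir /storage                            # clear the stale, empty mountpoint
modprobe zfs                              # ensure the kernel module is loaded
systemctl start zfs-import-cache.service  # import pools from the cachefile
zfs mount -a                              # mount all ZFS filesystems
```

The longer-term fix is usually to make sure the import and mount units are enabled so this happens automatically: systemctl enable zfs-import-cache.service zfs-mount.service zfs.target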

demizer commented on 2016-06-10 05:27

@predmijat, you should try exporting your pools and reimporting before reboot. This could be related to some hostid issues we are working through. I have not had much time to focus on these packages for testing lately. Sorry about that!
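A sketch of that export/re-import cycle ("tank" is a placeholder pool name):

```shell
zpool export tank   # cleanly release the pool
zpool import tank   # re-import; records the current hostid in the pool config
# then reboot and verify:
systemctl status zfs-mount.service
```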

larsko commented on 2016-06-09 21:43

@predmijat

Did you try importing and then exporting the pool before reboot?
