Package Details: zfs-linux-git 0.7.0_rc2_r44_g251cb8d_4.8.11_1-1

Git Clone URL: https://aur.archlinux.org/zfs-linux-git.git (read-only)
Package Base: zfs-linux-git
Description: Kernel modules for the Zettabyte File System.
Upstream URL: http://zfsonlinux.org/
Licenses: CDDL
Groups: archzfs-linux-git
Conflicts: zfs-linux, zfs-linux-lts
Provides: zfs
Replaces: zfs-git
Submitter: demizer
Maintainer: demizer
Last Packager: demizer
Votes: 7
Popularity: 1.279262
First Submitted: 2016-04-21 08:46
Last Updated: 2016-11-29 09:13

Latest Comments

wolfdogg commented on 2016-10-27 07:32

Info: Cannot write file attributes of "\\server\pool\path\filex.ffs_tmp".
Error Code 1: Incorrect function. (DeviceIoControl, FSCTL_SET_SPARSE)

The error message above suggests that since I updated ZFS yesterday (or very recently, at least) I'm seeing attribute errors when using FreeFileSync to push to the ZFS SAN. It feels as if sparse files (or some related feature) have been flagged as enabled on the ZFS array, either by me or automatically, when they shouldn't be; so when FreeFileSync goes to write, the feature is flagged but not actually available (an unsupported operation). Or there may be something missing in the drivers; I'm not sure.

ref:
zfs sparse files https://blogs.oracle.com/bonwick/entry/seek_hole_and_seek_data
FSCTL_SET_SPARSE https://msdn.microsoft.com/en-us/library/windows/desktop/aa364596(v=vs.85).aspx

Edit: topic made here https://bbs.archlinux.org/viewtopic.php?pid=1671430#p1671430

spuggy commented on 2016-08-18 22:42

@larsko - you're a star! My ZFS volumes haven't mounted in months; there's even a bug in the git tracker for it. Thought I was going to have to import and manually start Gluster on reboots for evermore, until I tried your suggestion. Works a treat - Thanks!

@jerome2016, if you switched to zfs-dkms, that's a different maintainer, and those have their own pages.

I found the regular zfs packages too old for my taste - but keeping kernel/zfs-git modules in sync was a little too manual/awkward until I added the archzfs repo that demizer thoughtfully provides; that makes it a snap.

@demizer; you rock - thanks!!!

jerome2016 commented on 2016-08-04 00:50

Again, I cannot install/update ZFS.
Please... I would like to use ZFS again. Is it possible to have it running and stable? I installed it before (some months ago), and at update time something failed, many times. I try to use ZFS for a backup system, but it seems to be a non-stable choice because of unstable package maintenance.

this time the problem is:
https://gist.github.com/jerome-diver/051e32ee1a29c8464843eb303f496cb2

The solution in my case was to remove all of these packages (zfs-linux-git, zfs-utils-linux-git and the related spl ones) and install the zfs-dkms packages.
That was the only way for me to get ZFS working again.

predmijat commented on 2016-06-10 10:35

I moved to zfs-linux (without -git), and solved this. More details at https://bbs.archlinux.org/viewtopic.php?pid=1633217#p1633217

predmijat commented on 2016-06-10 06:03

No, that didn't help.

In the meantime, update for 4.6.2 came along, but that didn't solve anything.

I exported the pool again, imported it (had to use -f, don't know why because I didn't use -f to export it), rebooted, ZFS didn't start.

# systemctl status zfs.target
● zfs.target - ZFS startup target
Loaded: loaded (/usr/lib/systemd/system/zfs.target; enabled; vendor preset: enabled)
Active: active since Fri 2016-06-10 07:56:58 CEST; 24s ago

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/usr/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2016-06-10 07:56:58 CEST; 29s ago
Process: 244 ExecStart=/usr/bin/zfs mount -a (code=exited, status=1/FAILURE)
Main PID: 244 (code=exited, status=1/FAILURE)

Jun 10 07:56:59 hqr-workstation zfs[244]: The ZFS modules are not loaded.
Jun 10 07:56:59 hqr-workstation zfs[244]: Try running '/sbin/modprobe zfs' as root to load them.

At that point I have to:

1. remove the /storage directory (the mountpoint for my pool; nothing is in the directory at this point, it's created by some service I tweaked to use the pool)
2. run "systemctl start zfs-import-cache.service"
3. run "zfs mount -a"

Until the next reboot...
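(For anyone following along, the per-boot workaround above can be sketched as a POSIX shell function. The `/storage` mountpoint comes from the comment; on a machine without the ZFS tools installed, the function only prints what it would run rather than touching anything.)

```shell
# Hedged sketch of the per-boot workaround described above.
# "/storage" is the pool mountpoint from the comment.
remount_pool() {
    if ! command -v zfs >/dev/null 2>&1; then
        # ZFS tools not installed here: just show the commands.
        echo "would run: rmdir /storage; systemctl start zfs-import-cache.service; zfs mount -a"
        return 0
    fi
    rmdir /storage                               # mountpoint must be empty, or mounting fails
    systemctl start zfs-import-cache.service     # loads the module and imports pools from the cachefile
    zfs mount -a                                 # mounts every dataset with a mountpoint set
}
```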

demizer commented on 2016-06-10 05:27

@predmijat, you should try exporting your pools and reimporting before reboot. This could be related to some hostid issues we are working through. I have not had much time to focus on these packages for testing lately. Sorry about that!
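(The export/re-import dance demizer suggests looks roughly like this; `tank` is a placeholder pool name, and on a box without the ZFS tools the function only prints the commands it would run.)

```shell
# Hedged sketch of exporting and re-importing a pool before reboot,
# so the hostid recorded in the pool label matches the running system.
# "tank" is a placeholder pool name.
reimport_pool() {
    pool="${1:-tank}"
    if ! command -v zpool >/dev/null 2>&1; then
        # ZFS tools not installed here: just show the commands.
        echo "would run: zpool export $pool && zpool import $pool"
        return 0
    fi
    zpool export "$pool" &&   # writes a clean label under the current hostid
    zpool import "$pool" &&   # re-reads the label (may need -f after a hostid change)
    zpool status "$pool"      # confirm the pool is ONLINE before rebooting
}
```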

larsko commented on 2016-06-09 21:43

@predmijat

Did you try importing and then exporting the pool before reboot?

predmijat commented on 2016-06-09 20:23

@larsko
Didn't solve anything for me...

larsko commented on 2016-06-09 20:14

Enabling the services again with systemctl (in particular zfs-mount: systemctl enable zfs-mount.service) fixed this for me, and it works after rebooting as well. It looks like the unit names changed, and therefore the necessary services aren't started anymore?
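(larsko's fix, sketched as a shell function. The unit names are the ones mentioned in this thread; verify what exists on your system with `systemctl list-unit-files 'zfs*'`. When systemd isn't actually running, the function only prints the commands.)

```shell
# Hedged sketch: re-enable the ZFS units under their current names.
# Unit names assumed from the comments in this thread.
enable_zfs_units() {
    for unit in zfs-import-cache.service zfs-mount.service zfs.target; do
        if command -v systemctl >/dev/null 2>&1 && [ -d /run/systemd/system ]; then
            systemctl enable "$unit"
        else
            # Not a running systemd system: just show the command.
            echo "would run: systemctl enable $unit"
        fi
    done
}
```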

Enoid commented on 2016-06-09 19:21

Yes, I have the same issue as you. Your fix works too, but has to be run after each reboot.
