kick-off #1
Adding myself.
Thank you @s-m-e
Hello, I'm looking for alternative FUSE Python bindings that support FUSE 3. So, as this issue continues the one in fusepy: what is the project status? It looks like the project fizzled out shortly after it was forked?
I got stuck at (CI) testing, hence so little progress. I did not want to make major changes without the ability to test them against a test suite of some sort across all supported operating systems, see #13. I can test on Linux via my own filesystem - passing the pjdfstest suite, a descendant of FreeBSD's fstest for POSIX compliance, as well as stress tests with fsx-linux, based on the fsx flavor released by the Linux Test Project - but any other operating system became tricky. Just making the tests work on Linux told me a lot about edge cases that needed to be covered - yet another point in favor of having proper tests on the other operating systems as well. With the CI tools at the time, nobody was able to properly run FreeBSD. OS X was possible but a pain, and Windows ... Anyway, the CI landscape has changed since then, and it appears to be worth looking into again. I have long been considering reevaluating the situation and I'd appreciate any help.
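For reference, a minimal sketch of how such a pjdfstest run could be driven from Python, e.g. as a CI step. This is not the author's actual harness; the mount point and the pjdfstest checkout path are assumptions for illustration. pjdfstest itself is invoked via `prove` with the working directory set to the filesystem under test, as its documentation recommends.

```python
# Hedged sketch: run pjdfstest against an already-mounted FUSE filesystem.
import pathlib
import subprocess

MOUNT_POINT = pathlib.Path("/mnt/fuse-under-test")      # hypothetical mount
PJDFSTEST_TESTS = pathlib.Path("/opt/pjdfstest/tests")  # hypothetical checkout

def run_pjdfstest() -> int:
    """Run the POSIX compliance suite from inside the filesystem under test."""
    result = subprocess.run(
        ["prove", "-r", str(PJDFSTEST_TESTS)],
        cwd=MOUNT_POINT,
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    raise SystemExit(run_pjdfstest())
```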
For Windows CI I use AppVeyor or GitHub Actions. Both will work for your purposes. (For advanced Windows testing I prefer AppVeyor, because I have far more control over the test environment. For example, for WinFsp I need to disable test-signing, run the driver verifier, etc., and I can only control such options in AppVeyor.)
I'd be willing to help. CI pipelines are always a difficult topic, though. I do have Windows and macOS test pipelines for ratarmount, which also execute integration tests by actually FUSE-mounting archives. Windows is always tricky, though, because there are so many environments: native (Power)Shell, WSL 1/2, Cygwin, and MSYS.
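A tiny, hypothetical helper illustrating the kind of environment detection such a test suite ends up needing on Windows; the function name and heuristics are mine, not ratarmount's, though the individual checks (Cygwin's `sys.platform`, MSYS2's `MSYSTEM` variable, the "microsoft" marker in WSL kernel versions) are real:

```python
# Illustrative sketch: classify the Windows-ish environment a test runs in,
# since FUSE availability and path semantics differ between them.
import os
import platform
import sys

def windows_flavor() -> str:
    if sys.platform == "cygwin":
        return "cygwin"
    if "MSYSTEM" in os.environ:  # set by MSYS2 shells
        return "msys"
    if sys.platform == "win32":
        return "native"
    if "microsoft" in platform.release().lower():  # WSL 1/2 kernels
        return "wsl"
    return "not-windows"
```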
IMHO the choice of CI, or the search for a specific CI solution (Jenkins/Buildbot), is not as important as having a nice suite of tests which can be executed easily. By "easily" I mean not requiring lots of ad-hoc setup, with most if not all setup done in the test fixtures, maybe with some formalization of those.

At some point, for our https://github.com/datalad/datalad project, I used Buildbot and worked out an integration with GitHub to report back on PRs etc., but it was just an extra burden to maintain all those environments. Now we just use AppVeyor, GitHub Actions, and Travis (historically it was the first). AppVeyor is preferred by my colleague because it provides an easy way to log into an instance if needed. Using some basic scripting and templating on GitHub Actions, we established quite extensive building/testing of git-annex, not only across Windows/Linux/OSX, and not only against downstream projects (which is what refuse seems to rely on somewhat ATM), but also on some client systems: see https://github.com/datalad/git-annex/ and the setup therein. And then we do even more downstream testing across extensions in https://github.com/datalad/datalad-extensions -- again relying on their consistent and simple testing.

So IMHO the current problem is not the absence of a choice of CI solution but the absence of tests! And some tests are better than no tests, so maybe the starting point could be composing some rudimentary tests here and trying to get them going, e.g. on GitHub Actions, across a range of OSes and Python versions? As an extra, workflows could be added to test the downstream projects mentioned in the README right in this repo (we also do that in datalad and other projects to ensure that we aren't breaking downstream). There are some pieces from which I could try to initiate some testing, if that sounds useful.
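To make the fixture idea concrete: a hedged sketch of a pytest fixture that encapsulates all mount/unmount setup so individual tests need no ad-hoc preparation. The mount command `my-fuse-fs` is a placeholder for whatever filesystem is under test; `fusermount -u` is the standard FUSE unmount on Linux.

```python
# Hypothetical pytest fixture: tests receive a ready mount point, all setup
# and teardown lives here.
import os
import subprocess
import tempfile
import time

import pytest

@pytest.fixture
def mounted_fs():
    mount_point = tempfile.mkdtemp()
    proc = subprocess.Popen(["my-fuse-fs", mount_point])  # placeholder command
    try:
        for _ in range(50):  # wait up to ~5 s for the mount to appear
            if os.path.ismount(mount_point):
                break
            time.sleep(0.1)
        assert os.path.ismount(mount_point), "filesystem did not come up"
        yield mount_point
    finally:
        subprocess.run(["fusermount", "-u", mount_point], check=False)
        proc.wait(timeout=10)

def test_mount_is_listable(mounted_fs):
    # No setup in the test itself -- the fixture guarantees a mounted fs.
    assert os.listdir(mounted_fs) is not None
```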
I can probably easily formalize my tests on Linux, i.e. make them part of this repo as a starting point. The thing with FUSE in general is that it's hard to come up with "simple" tests: it's an operating system kernel interface, after all. For each test or set of tests, you ideally want a clean initial state of FUSE (i.e. of the kernel side and of libfuse). Back to the simple side: any tests that can reasonably exercise the Python layer on its own would already be a gain.
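A sketch of the weaker, non-VM approximation of that clean-state requirement: before each test run, detect and lazily detach FUSE mounts left behind by earlier (possibly crashed) runs. Linux-only and illustrative; the mount-point prefix is a hypothetical naming convention, and a throwaway VM or container gives much stronger guarantees.

```python
# Hedged sketch: detach stale FUSE mounts left over from previous test runs.
import subprocess

def cleanup_stale_fuse_mounts(prefix: str = "/tmp/refuse-test-") -> None:
    # /proc/self/mounts lines: device mountpoint fstype options dump pass
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            _device, mount_point, fstype = line.split()[:3]
            if fstype.startswith("fuse") and mount_point.startswith(prefix):
                # -z: lazy unmount, so a hung filesystem process cannot block us
                subprocess.run(["fusermount", "-uz", mount_point], check=False)
```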
I have yet to grasp the need for going to VMs or even containers per se. Again -- some testing is better than no testing, and CI environments are VMs or containers already. As for building images: there is no need to rebuild base images; one can sweep through a few existing/pre-built ones when needed. FWIW, here is our session-level pytest fixture for starting a docker-compose instance to be reused by tests -- it might come in handy here: https://github.com/dandi/dandi-cli/blob/master/dandi/tests/fixtures.py#L401

Indeed, running some IO-intensive tasks on the FUSE'd filesystem is what could provide nice integration testing! The problem is that it might often be difficult to discern which elementary problem leads to some misbehavior. But maybe, instead, here in this repo there could be a collection of CI tests based on external tools, like the pjdfstest and fsx runs you already mention, for starters. Do you have some CI for that already?
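A hedged sketch loosely modeled on the session-scoped docker-compose fixture linked above: shared services start once per pytest session and are torn down at the end. The compose file name is an assumption for illustration, and the real dandi-cli fixture does considerably more (health checks, reuse across runs).

```python
# Illustrative session-scoped fixture: one docker-compose stack per session.
import subprocess

import pytest

@pytest.fixture(scope="session")
def compose_services():
    subprocess.run(
        ["docker-compose", "-f", "docker-compose.yml", "up", "-d"], check=True
    )
    try:
        yield
    finally:
        subprocess.run(
            ["docker-compose", "-f", "docker-compose.yml", "down"], check=False
        )
```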
Sorry, I am not following: how would the addition of tests require losing anything that works?
A bit of a misunderstanding here: any test is better than no test, you are absolutely right. If you have an idea what to test and how to test it, go for it. I'd be happy to support it. I tried to make a different point: at the moment, there is nothing in place that would tell us whether a change breaks behavior that currently works.
Exactly, this is what I observed when I began running tests against my own filesystem: it can be hard to trace a misbehavior back to the elementary problem causing it.

I want to maintain at least the current code quality / features / reliability. This is about file systems - users relying on them can lose (valuable) data. Any test therefore fundamentally helps. Making a judgement along the lines of "this code does what it is supposed to do without losing or unexpectedly altering data" is, however, from my perspective a fundamentally trickier thing to prove. This is what got me thinking about containers, VMs and the like. At the end of the day, when I invoke e.g. pjdfstest, I want the environment to be clean and well-defined, so that a failure actually points at the code under test.
I would start with some regression testing to establish a baseline. I did that for https://github.com/git-annex-remote-rclone/git-annex-remote-rclone/ at some point, and it gave development a healthy push forward since some fears were put to rest ;)
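That baseline idea can be as small as the following hedged sketch: record today's known failures once, commit them, and let CI fail only on *new* regressions. The baseline file path and the TAP-output parsing are assumptions for illustration; `prove` emits "not ok" lines for failing tests.

```python
# Illustrative baseline regression test, collectible by pytest.
import json
import pathlib
import subprocess

BASELINE = pathlib.Path("tests/baseline-failures.json")  # hypothetical file

def current_failures() -> set:
    # Placeholder runner: execute the external suite, keep failing TAP lines.
    out = subprocess.run(
        ["prove", "-r", "tests"], capture_output=True, text=True
    )
    return {line for line in out.stdout.splitlines() if line.startswith("not ok")}

def test_no_new_failures():
    known = set(json.loads(BASELINE.read_text()))
    new = current_failures() - known
    assert not new, f"regressions beyond recorded baseline: {sorted(new)}"
```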
Yes. The pjdfstest and fsx runs I already have for Linux could serve as exactly that baseline.
Continuing fusepy#134.