diff --git a/README.md b/README.md
index 04f243e..f3de3e7 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,84 @@ The goal of this is to:
 
 If you think you see a bug - write a test-case and let others reproduce it on their systems.
 
-This is work in progress.
+Quickstart
+==========
+
+- Find the IP of your local owncloud server, using e.g. `ifconfig`
+- Run smashbox against that server, e.g. `172.16.12.112:80/octest`. Make sure to mount the `smashdir` and `tmp` directories on the local filesystem so that you can debug the test run and cache the client build
+- The smash wrapper will check that the test `nplusone` exists in the `lib` folder, following the `test_[name].py` naming scheme
+```
+docker run \
+-e SMASHBOX_URL=<server-ip>:<port>/<path> \
+-e SMASHBOX_USERNAME=admin \
+-e SMASHBOX_PASSWORD=admin \
+-e SMASHBOX_ACCOUNT_PASSWORD=admin \
+-e SMASHBOX_TEST_NAME=nplusone \
+-v ~/smashdir:/smashdir \
+-v /tmp:/tmp \
+owncloud/smashbox:build
+```
+- Check the run logs (grep for `error`, `warning`, `critical`, etc.)
+```
+$ cat ~/smashdir/log-test_nplusone.log | grep error
+```
+- Check the client logs (grep for `error`, `warning`, `critical`, etc.)
+```
+$ cat ~/smashdir/test_nplusone/worker0-ocsync.step01.cnt000.log | grep error
+```
+- Check the sync client directories of the workers
+```
+$ ls ~/smashdir/test_nplusone/worker1/
+```
+- You can also run the whole integration test suite in docker against your server
+```
+./bin/run_all_integration.sh 172.16.12.112:80/octest
+```
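+
+For reference, the `test_[name].py` scheme means a test is simply a Python module under `lib`. A minimal sketch (file and worker names are illustrative; the helpers come from `smashbox.utilities`, as in the bundled tests):
+```python
+# lib/test_mytest.py -- hypothetical minimal test; run with: bin/smash lib/test_mytest.py
+from smashbox.utilities import *
+
+@add_worker
+def workerA(step):
+    step(1, 'sync a fresh workdir')
+    d = make_workdir()   # local working directory of this worker
+    run_ocsync(d)        # sync it against the configured server
+    list_files(d)
+```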
+
+Important integration tests
+===========================
+
+ * [Basic Sync and Conflicts](lib/test_basicSync.py)
+   - basicSync_filesizeKB from 1kB to 50MB (normal and chunked file sync)
+   - basicSync_rmLocalStateDB: removing the local database in the test (index 0-3) or not (index 4-7)
+ * [Concurrently removing a directory while files are being added](lib/test_concurrentDirRemove.py)
+   - Currently only checks for corrupted files in the outcome
+   - Removing the directory while a large file is chunk-uploaded (index 0)
+   - Removing the directory while lots of smaller files are uploaded (index 1)
+   - Removing the directory before files are uploaded (index 2)
+ * [Resharing](lib/oc-tests/test_reshareDir.py)
+   - Share a directory with a receiver; the receiver reshares one of the files with another user
+ * [Directory sharing between users](lib/oc-tests/test_shareDir.py)
+   - Tests various sharing actions between users
+ * [File sharing between users](lib/oc-tests/test_shareFile.py)
+   - Tests various sharing actions between users
+ * [File sharing between users and groups](lib/oc-tests/test_shareGroup.py)
+   - Tests various sharing actions between users and groups
+ * [File sharing by link](lib/oc-tests/test_shareLink.py)
+   - Tests various sharing actions with links
+ * [Ensures correct behaviour under different permissions](lib/oc-tests/test_sharePermissions.py)
+   - Tests various sharing actions with different share permissions
+ * [Ensures correct etag propagation 1](lib/owncloud/test_sharePropagationGroups.py)
+   - Tests etag propagation when sharing/resharing between groups of users
+ * [Ensures correct etag propagation 2](lib/owncloud/test_sharePropagationInsideGroups.py)
+   - Tests etag propagation when sharing/resharing between groups of users
+ * [Syncing shared mounts](lib/owncloud/test_shareMountInit.py)
+   - Focuses on syncing a share mount in the most common sharing cases
+
+Important performance tests
+===========================
+
+ * [Upload/download of small/big files](lib/test_nplusone.py)
+   - The test should monitor upload/download sync time in each of the scenarios (TODO)
+   - Test (index 0) verifies the performance of many small files - 100 files of 1kB each
+   - Test (index 1) verifies the performance of one big over-chunking-size file with a total size of 60MB
+ * [Shared Mount Performance](lib/owncloud/test_shareMountInit.py)
+   - PROPFIND on the root folder - initializes the mount points (initMount is done only on the first PROPFIND on received shares)
+   - PROPFIND on the root folder with initialized content and mount points
+   - PUT to a non-shared folder
+   - PUT to a shared folder
+   - GET to a non-shared folder
+   - GET to a shared folder
 
 Project tree
 ============
@@ -96,6 +173,8 @@ Location of sync client may be configured like this:
 Installation
 ============
 
+Note: Currently this framework works on Unix-like systems only. A Windows port is needed.
+
 Clone git repository into your local ``smashbox`` directory.
 
 Copy the etc/smashbox.conf.template into etc/smashbox.conf
@@ -140,7 +219,10 @@ Examples:
 
     # basic test
     bin/smash lib/test_basicSync.py
-
+
+    # basic test, selecting the test number as defined in the test's `testsets` array
+    bin/smash -t 0 lib/test_basicSync.py
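+
+The `-o` option overrides module-level config values that a test reads via `config.get`, while `-t` selects one of the dicts in the test's `testsets` array. As an illustrative sketch (the default shown here is an assumption, not necessarily the value in `lib/test_nplusone.py`):
+```python
+# at the top of a test module; overridable with: bin/smash -o nplusone_nfiles=10 ...
+nplusone_nfiles = int(config.get('nplusone_nfiles', 100))
+```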
+
     # run a test with different parameters
     bin/smash -o nplusone_nfiles=10 lib/test_nplusone.py
 
@@ -149,6 +231,71 @@
 You will find main log files in ~/smashdir/log* and all temporary files and detailed logs for each test-case in ~/smashdir/
 
+Monitoring integration
+=======================
+
+Currently, the monitoring module supports `local` and `prometheus` endpoints. The Prometheus endpoint can be used in integration with Jenkins.
+
+By default, two values are prepared for export, 'total_duration' and 'number_of_queries'; however, one can also export custom variables from within a test, using e.g. `commit_to_monitoring("download_duration",time1-time0)` inside the `lib/test_nplusone.py` test.
+
+**NOTE: To enable checking the number of queries, one needs to set `oc_check_diagnostic_log = True` in the `smashbox.conf` file**
+
+**NOTE: To enable diagnostics at SUMMARY level on the server, one needs to go to the server directory, e.g. `/var/www/owncloud`, and run:**
+
+```
+git clone https://github.com/owncloud/diagnostics apps/diagnostics
+sudo -u www-data php occ app:enable diagnostics
+sudo -u www-data php occ config:system:set --value true debug
+sudo -u www-data php occ config:app:set --value 1 diagnostics diagnosticLogLevel
+```
+
+**Export to local monitor example:**
+
+Executing
+
+```
+bin/smash -t 1 -o monitoring_type=local lib/test_nplusone.py
+```
+
+will execute index `1` of the `test_nplusone` test; the option flag `-o monitoring_type=local` results in the output below if the test completed successfully
+
+```
+download_duration 0.750847816467
+upload_duration 1.4001121521
+returncode 0
+elapsed 6.87230300903
+```
+
+or the output below in case of failure
+
+```
+returncode 2
+elapsed 7.0446870327
+```
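+
+Custom metrics follow the same pattern as the `commit_to_monitoring("download_duration", ...)` call quoted above: a test can time any operation itself and commit the result. A sketch (the timing code is illustrative; it would live inside a worker, after `d = make_workdir()`):
+```python
+import time
+
+t0 = time.time()
+run_ocsync(d)        # the operation being measured
+t1 = time.time()
+
+# exported next to the default 'total_duration' and 'number_of_queries'
+commit_to_monitoring("download_duration", t1 - t0)
+```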
+
+**Export to prometheus with jenkins example:**
+
+Executing
+
+```
+bin/smash -t 1 -o monitoring_type=prometheus -o endpoint=http://localhost:9091/metrics/job/jenkins/instance/smashbox -o duration_label=jenkins_smashbox_test_duration -o queries_label=jenkins_smashbox_db_queries -o owncloud=daily-master -o client=2.3.1 -o suite=nplusonet1 -o build=test_build1 lib/test_nplusone.py
+```
+
+will result in:
+ * pushing the monitoring points to the Prometheus endpoint `http://localhost:9091/metrics/job/jenkins/instance/smashbox`
+ * the flags `-o duration_label=jenkins_smashbox_test_duration` and `-o queries_label=jenkins_smashbox_db_queries` causing the default results 'total_duration' and 'number_of_queries' to be exported to Prometheus
+ * the additional flags `-o owncloud=daily-master`, `-o client=2.3.1`, `-o suite=nplusonet1` and `-o build=test_build1` being available to distinguish smashbox runs
+
+In case of failure to push to monitoring you will see output like:
+
+`curl: (7) Failed to connect to localhost port 9091: Connection refused`
+
+**Adding a custom monitoring endpoint:**
+
+One can add their own monitoring endpoint by [adding a new option](python/smashbox/utilities/monitoring.py) in `push_to_monitoring`. You can then exercise your custom test (as in [test_nplusone](lib/test_nplusone.py)) and monitoring endpoint by setting the flag
+`-o monitoring_type=MY_CUSTOM_MONITORING_TYPE`, e.g. `-o monitoring_type=local`
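+
+As a rough sketch of such an extension (the actual signature and dispatch inside `push_to_monitoring` may differ; `push_to_prometheus` and `my_custom_push` are hypothetical helpers):
+```python
+def push_to_monitoring(name, value):
+    monitoring_type = config.get('monitoring_type', 'local')
+    if monitoring_type == 'local':
+        print(name, value)                # plain output, as shown above
+    elif monitoring_type == 'prometheus':
+        push_to_prometheus(name, value)   # pushes to the -o endpoint=... URL
+    elif monitoring_type == 'MY_CUSTOM_MONITORING_TYPE':
+        my_custom_push(name, value)       # your new endpoint goes here
+```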
+
 
 Different client/server
 =======================
diff --git a/bin/config_gen b/bin/config_gen
new file mode 100755
index 0000000..9bf9c76
--- /dev/null
+++ b/bin/config_gen
@@ -0,0 +1,87 @@
+#!/usr/bin/env python
+
+import sys, os.path
+# insert the path to the smashbox python packages, based on the relative position of this script inside the source tree
+exeDir = os.path.abspath(os.path.normpath(os.path.dirname(sys.argv[0])))
+pythonDir = os.path.join(os.path.dirname(exeDir), 'python')
+sys.path.insert(0, pythonDir)
+etcDir = os.path.join(os.path.dirname(exeDir), 'etc')
+defaultTemplateFile = os.path.join(etcDir, 'smashbox.conf.template-owncloud')
+defaultOutputFile = os.path.join(etcDir, 'smashbox.conf')
+
+import smashbox.configgen.generator as generator
+import smashbox.configgen.processors as processors
+from smashbox.configgen.processors_hooks import LoggingHook
+import logging
+import argparse
+import json
+
+parser = argparse.ArgumentParser(description='Config generator for smashbox')
+parser.add_argument('-i', default=defaultTemplateFile, help='template file to be used', dest='input_file')
+parser.add_argument('-o', default=defaultOutputFile, help='output file', dest='output_file')
+group = parser.add_mutually_exclusive_group()
+group.add_argument('--no-ask', default=None, action='store_false', help='don\'t ask for required keys', dest='ask_keys')
+group.add_argument('--ask', default=None, action='store_true', help='ask for required keys', dest='ask_keys')
+parser.add_argument('-k', default=[], action='append', required=False, help='key=value pairs', dest='keys')
+parser.add_argument('-kt', default=[], action='append', required=False, help='key=type pairs', dest='key_types')
+parser.add_argument('--key-value-file', help='json file containing key-value pairs. The file format should be something like {keyname: {value: value, type: type}, oc_server: {value: server.com, type: string}, oc_ssl_enable: {value: True, type: bool}}')
+parser.add_argument('--logfile', help='write logs to this file')
+args = parser.parse_args()
+
+global_vars = {}
+local_vars = {}
+with open(args.input_file) as ifile:
+    code = compile(ifile.read(), args.input_file, 'exec')
+    exec(code, global_vars, local_vars)
+
+overwrite_dict = {}
+
+if args.key_value_file:
+    with open(args.key_value_file, 'r') as f:
+        data = json.load(f)
+        if type(data) is dict:
+            for data_element in data:
+                key = data_element
+                value = str(data[data_element]['value'])
+                if 'type' in data[data_element]:
+                    value = processors.convert_string_to_type(value, data[data_element]['type'])
+                overwrite_dict[key] = value
+
+# convert the keys argument to a dictionary
+key_list = [item.split('=', 1) for item in args.keys]
+key_dict = dict(key_list)
+
+# convert the key_types to [[key, type],[key, type]] and change the type of the values
+key_type_list = [item.split('=', 1) for item in args.key_types]
+for keytype in key_type_list:
+    if keytype[0] in key_dict:
+        key_dict[keytype[0]] = processors.convert_string_to_type(key_dict[keytype[0]], keytype[1])
+overwrite_dict.update(key_dict)
+
+config_generator = generator.Generator()
+config_generator.set_processors_from_data(local_vars['_configgen'])
+
+if args.ask_keys is not None:
+    processor = config_generator.get_processor_by_name('RequiredKeysProcessor')
+    if processor is not None:
+        processor.set_ask(args.ask_keys)
+
+if overwrite_dict:
+    # we need to overwrite keys
+    processor2 = config_generator.get_processor_by_name('OverwritterProcessor')
+    if processor2 is not None:
+        processor2.set_dict_to_merge(overwrite_dict)
+
+# set up logging for each processor
+if args.logfile:
+    logging.basicConfig(level=logging.NOTSET, format='%(asctime)-15s %(levelname)s %(name)s : %(message)s', filename=args.logfile)
+    for p in config_generator.get_processor_list():
+        processor_name = p.get_name()
+        logger = logging.getLogger('%s.%s' % (__name__, processor_name))
+        p.register_observer('logger', LoggingHook(logger, logging.INFO))
+
+    logging.getLogger(__name__).info('ready to start the generation')
+
+# generate the config file
+config_generator.process_data_to_file(local_vars, args.output_file)
+
diff --git a/bin/run_all_integration.sh b/bin/run_all_integration.sh
new file mode 100755
index 0000000..9cd8165
--- /dev/null
+++ b/bin/run_all_integration.sh
@@ -0,0 +1,28 @@
+#!/bin/bash
+
+if [ -z "$1" ] ; then
+    echo "Running locally from etc/smashbox.conf"
+
+    if [ ! -f requirements.txt ]; then
+        echo "requirements.txt not found in this directory; cd to the smashbox root dir, then run this script"
+        exit 1
+    fi
+
+    CMD="bin/smash -v -a"
+else
+    echo "Running in docker against server ip $1"
+    CMD="docker run -e SMASHBOX_URL=$1 -e SMASHBOX_USERNAME=admin -e SMASHBOX_PASSWORD=admin -e SMASHBOX_ACCOUNT_PASSWORD=admin owncloud/smashbox"
+fi
+
+$CMD lib/test_basicSync.py && \
+$CMD lib/test_concurrentDirRemove.py && \
+$CMD lib/test_nplusone.py && \
+$CMD lib/oc-tests/test_reshareDir.py && \
+$CMD lib/oc-tests/test_shareDir.py && \
+$CMD lib/oc-tests/test_shareFile.py && \
+$CMD lib/oc-tests/test_shareGroup.py && \
+$CMD lib/oc-tests/test_shareLink.py && \
+$CMD lib/oc-tests/test_sharePermissions.py && \
+$CMD lib/owncloud/test_shareMountInit.py && \
+$CMD lib/owncloud/test_sharePropagationGroups.py && \
+$CMD lib/owncloud/test_sharePropagationInsideGroups.py
diff --git a/etc/smashbox.conf.template b/etc/smashbox.conf.template
index c0b4c0e..3ad9ffe 100644
--- a/etc/smashbox.conf.template
+++ b/etc/smashbox.conf.template
@@ -138,3 +138,8 @@ oc_server_log_user = "www-data"
 # Reset the server log file and verify that no exceptions and other known errors have been logged
 #
 oc_check_server_log = False
+
+#
+# Reset the diagnostic log file and use diagnostics for assertions
+#
+oc_check_diagnostic_log = False
diff --git a/etc/smashbox.conf.template-owncloud b/etc/smashbox.conf.template-owncloud
new file mode 100644
index 0000000..934131a
--- /dev/null
+++ b/etc/smashbox.conf.template-owncloud
@@ -0,0 +1,165 @@
+#
+# The _open_SmashBox Project.
+#
+# Author: Jakub T. Moscicki, CERN, 2013
+# License: AGPL
+#
+# this is the main config file template: copy to smashbox.conf and adjust the settings
+#
+# this template should work without changes if you are running your tests directly on the owncloud application server
+#
+
+# this is the top directory where all local working files are kept (test working directories, test logs, test data, temporary filesets, ..)
+smashdir = "~/smashdir" + +# name of the account used for testing +# if None then account name is chosen automatically (based on the test name) +oc_account_name=None + +# default number of users for tests involving multiple users (user number is appended to the oc_account_name) +# this only applies to the tests involving multiple users +oc_number_test_users=3 + +# name of the group used for testing +oc_group_name=None + +# default number of groups for tests involving multiple groups (group number is appended to the oc_group_name) +# this only applies to the tests involving multiple groups +oc_number_test_groups=1 + +# password for test accounts: all test account will have the same password +# if not set then it's an error +oc_account_password="demo" + +# owncloud test server +# if left blank or "localhost" then the real hostname of the localhost will be set +oc_server = '' + + +# root of the owncloud installation as visible in the URL +oc_root = 'owncloud' + +# webdav endpoint URI within the oc_server +import os.path +oc_webdav_endpoint = os.path.join(oc_root,'remote.php/webdav') # standard owncloud server + +# target folder on the server (this may not be compatible with all tests) +oc_server_folder = '' + +# should we use protocols with SSL (https, ownclouds) +oc_ssl_enabled = True + +# how to invoke shell commands on the server +# for localhost there is no problem - leave it blank +# for remote host it may be set like this: "ssh -t -l root $oc_server" +# note: configure ssh for passwordless login +# note: -t option is to make it possible to run sudo +oc_server_shell_cmd = "" + +# Data directory on the owncloud server. +# +oc_server_datadirectory = os.path.join('/var/www/html',oc_root, 'data') + +# a path to server side tools (create_user.php, ...) 
+
+#
+# it may be specified as relative path "dir" and then resolves to
+# <smashbox>/dir where <smashbox> is the top-level of the tree
+# containing THIS configuration file
+#
+
+oc_server_tools_path = "server-tools"
+
+# a path to the ocsync command, with options
+# this path should work for all client hosts
+#
+# it may be specified as relative path "dir" and then resolves to
+# <smashbox>/dir where <smashbox> is the top-level of the tree
+# containing THIS configuration file
+#
+oc_sync_cmd = "client/build/mirall/bin/owncloudcmd --trust"
+
+# number of times to repeat the ocsync run every time
+oc_sync_repeat = 1
+
+####################################
+
+# unique identifier of your test run
+# if None then the runid is chosen automatically (and stored in this variable)
+runid = None
+
+# if True then the local working directory path will have the runid added to it automatically
+workdir_runid_enabled=False
+
+# if True then the runid will be part of the oc_account_name automatically
+oc_account_runid_enabled=False
+
+####################################
+
+# this defines the default account cleanup procedure
+# - "delete": delete account if exists and then create a new account with the same name
+# - "keep": don't delete existing account but create one if needed
+#
+# these are not implemented yet:
+# - "sync_delete": delete all files via a sync run
+# - "webdav_delete": delete all files via webdav DELETE request
+# - "filesystem_delete": delete all files directly on the server's filesystem
+oc_account_reset_procedure = "delete"
+
+# this defines the default local run directory reset procedure
+# - "delete": delete everything in the local run directory prior to running the test
+# - "keep": keep all files (from the previous run)
+rundir_reset_procedure = "delete"
+
+web_user = "www-data"
+
+oc_admin_user = "at_admin"
+oc_admin_password = "admin"
+
+# clean up imported namespaces
+del os
+
+# Verbosity of the curl client.
+# If None then verbosity is on when smashbox is run in --debug mode.
+# Set it to True or False to override.
+#
+pycurl_verbose = None
+
+# scp port to be used in scp commands, used primarily when copying over the server log file
+scp_port = 22
+
+# user that can r+w the owncloud.log file (needs to be configured for passwordless login)
+oc_server_log_user = "www-data"
+
+#
+# Reset the server log file and verify that no exceptions and other known errors have been logged
+#
+oc_check_server_log = False
+
+#
+# Reset the diagnostic log file and use diagnostics for assertions
+#
+oc_check_diagnostic_log = False
+
+from collections import OrderedDict
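+# The processors below drive bin/config_gen and are applied in the order listed
+# (behaviour as wired up in bin/config_gen): KeyRemoverProcessor drops the keys in
+# its keylist from the generated config, OverwritterProcessor merges in key=value
+# pairs passed on the command line, RequiredKeysProcessor asks for (or defaults)
+# the listed keys, and SortProcessor sorts the resulting entries.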
+_configgen = OrderedDict([('KeyRemoverProcessor',
+                           {'keylist': ('_configgen', 'oc_server', 'oc_ssl_enabled',
+                                        'oc_admin_user', 'oc_admin_password',
+                                        'oc_root', 'oc_webdav_endpoint', 'oc_server_shell_cmd',
+                                        'oc_sync_cmd', 'scp_port')}),
+                          ('OverwritterProcessor',
+                           {'dict_to_merge': {}}),
+                          ('RequiredKeysProcessor',
+                           {'keylist': [
+                               {'name': 'oc_server', 'help_text': 'ip or hostname of the server where owncloud is located, including the port, such as "10.20.30.40:8080"'},
+                               {'name': 'oc_ssl_enabled', 'type': 'bool', 'default': False, 'help_text': 'if you access the server through https, set this to True'},
+                               {'name': 'oc_root', 'help_text': 'the path for the url to be added after the server. To access "http://server.com/owncloud" use "owncloud"; leave it empty if you want to access "http://server.com/"'},
+                               {'name': 'oc_webdav_endpoint', 'help_text': 'the path for the webdav endpoint. If the webdav endpoint is "http://server.com/owncloud/remote.php/webdav" use "owncloud/remote.php/webdav"', 'default': 'remote.php/webdav'},
+                               {'name': 'oc_admin_user', 'default': 'admin'},
+                               {'name': 'oc_admin_password', 'default': 'Password'},
+                               {'name': 'oc_server_shell_cmd', 'help_text': 'ssh command to connect to the server, such as "ssh -t -l root <server>" (include the server). Leave it empty if the server is localhost'},
+                               {'name': 'scp_port', 'type': 'int', 'default': 22, 'help_text': 'port for scp commands accessing the owncloud server'},
+                               {'name': 'oc_sync_cmd', 'default': '/usr/bin/owncloudcmd --trust', 'help_text': 'owncloudcmd command. Use the absolute path to the app and any required options'},
+                           ],
+                           'ask': True}),
+                          ('SortProcessor', None)])
+del OrderedDict
diff --git a/lib/examples/test_hello.py b/lib/examples/test_hello.py
new file mode 100644
index 0000000..5f33e72
--- /dev/null
+++ b/lib/examples/test_hello.py
@@ -0,0 +1,129 @@
+
+# this is an all-in-one example which shows various aspects of the smashbox framework
+
+# import utilities which are the building blocks of each testcase
+from smashbox.utilities import *
+from smashbox.utilities import reflection
+
+# all normal output should go via logger.info()
+# all additional output should go via logger.debug()
+
+logger.info("THIS IS A HELLO WORLD EXAMPLE")
+
+logger.debug("globals() %s",globals().keys())
+
+
+# Workers run as independent processes and wait for each other at each
+# defined step: a worker will not enter step(N) until all others have
+# completed step(N-1). A worker waiting at step(N) has already
+# implicitly completed all steps 0..N-1
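+# For example: a worker that calls step(3) blocks there until every other
+# worker has finished the code it runs before its own step(3).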
+
+@add_worker
+def helloA(step):
+    logger.debug("globals() %s",globals().keys())
+
+
+    # Sharing of variables between workers - see below.
+    shared = reflection.getSharedObject()
+
+    step(0,'defining n')
+
+    shared['n'] = 111
+
+    # Variable 'n' is now shared and visible to all the workers.
+    # This happens when a value of the variable is assigned.
+    #
+    # Limitations: Workers A and B should not modify the same shared
+    # variable in parallel (that is, in the same step). Also, the
+    # worker that sets the variable should do it in a step preceding
+    # the steps in which other workers make use of it. Only this
+    # will guarantee that the value is set before someone else is
+    # trying to make use of it.
+    #
+    # If you need more than one worker to modify the same
+    # shared variable make sure this happens in separate steps.
+
+
+    step(1,'defining xyz')
+
+    # Contrary to the plain types (string,int,float) here we share a list - see limitations below.
+    shared['xyz'] = [1,2,3]
+
+    # If you modify the value of a shared attribute in place
+    # (e.g. list.append or list.sort) then this is NOT visible to other
+    # processes until you really make the assignment.
+    #
+    # Some ideas how to handle lists by assigning a new value:
+    # * use shared['list']+=[a] instead of shared['list'].append(a)
+    # * use shared['list']=sorted(shared['list']) instead of shared['list'].sort()
+    #
+    step(2,'waiting...')
+
+    step(3,'checking integrity')
+
+    # this is a non-fatal assert - the error will be reported and the test marked as failed, but execution will continue
+    error_check(shared['n']==222, 'problem handling shared n=%d'%shared['n'])
+
+    # this is a fatal assert - execution will stop immediately
+    fatal_check(list(shared['xyz'])==[1,2,3,4], 'problem handling shared xyz=%s'%repr(shared['xyz']))
+
+@add_worker
+def helloB(step):
+    logger.debug("dir() %s",dir())
+
+    shared = reflection.getSharedObject()
+
+    step(2,'modifying and reassigning n, xyz')
+    shared['n'] += 111
+    shared['xyz'] += [4]
+
+    step(3, 'checking integrity')
+    error_check(shared['n']==222, 'problem handling shared n=%d'%shared['n'])
+    error_check(list(shared['xyz'])==[1,2,3,4], 'problem handling shared xyz=%s'%repr(shared['xyz']))
+
+
+@add_worker
+def reporter(step):
+    shared = reflection.getSharedObject()
+
+    # report on shared objects at every step
+    for i in range(5): # until the last step used in this example
+        step(i)
+        logger.info("shared: %s",str(shared))
+
+# this shows how workers with the same function body may be added in a loop any number of times
+
+# shared.k is an example of how NOT to use the shared object -- see comments at the top of this file
+# this worker code will run N times in parallel -- see below
+def any_worker(step):
+    shared=reflection.getSharedObject()
+
+    shared['k'] = 0
+
+    step(1,None)
+
+    shared['k'] += 1
+
+    step(2,None)
+    shared['k'] += 1
+
+    step(3,None)
+    shared['k'] += 1
+
+    step(4,'finish')
+
+    logger.info("k=%d, expected %d",shared['k'],N*3)
+    # one might assume here that shared.k == N*3; however, as the
+    # assignments to shared.k are not atomic and may happen in parallel, this is not reliable.
+    # just don't do this kind of thing!
+
+# this shows how one may add configuration parameters to the testcase
+N = int(config.get('n_hello_workers',5))
+
+logger.info("will create %d additional workers",N)
+
+# here we add the workers (and append the number to each name)
+for i in range(N):
+    add_worker(any_worker,'any_worker%d'%i)
+
+
diff --git a/lib/examples/test_hello2.py b/lib/examples/test_hello2.py
new file mode 100644
index 0000000..6105551
--- /dev/null
+++ b/lib/examples/test_hello2.py
@@ -0,0 +1,26 @@
+
+# this example shows the logic of handling fatal errors
+
+# import utilities which are the building blocks of each testcase
+from smashbox.utilities import *
+from smashbox.utilities import reflection
+
+@add_worker
+def helloA(step):
+    step(1)
+
+    fatal_check(False,'this is a FATAL error')
+
+
+@add_worker
+def helloB(step):
+    step(1)
+
+    fatal_check(False,'this is a FATAL error')
+
+
+@add_worker
+def helloC(step):
+    step(2)
+
+    logger.error('Executing post fatal handler')
diff --git a/lib/examples/test_hello3.py b/lib/examples/test_hello3.py
new file mode 100644
index 0000000..444f024
--- /dev/null
+++ b/lib/examples/test_hello3.py
@@ -0,0 +1,33 @@
+
+__doc__ = "This example shows the testsets."
+
+# import utilities which are the building blocks of each testcase
+from smashbox.utilities import *
+
+A = int(config.get('hello3_A',0))
+B = int(config.get('hello3_B',0))
+
+testsets = [{"hello3_A":1,"hello3_B":2},{"hello3_A":111,"hello3_B":222}]
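+
+# Each dict above defines one numbered variant of this test: index 0 runs with
+# hello3_A=1, hello3_B=2 and index 1 with hello3_A=111, hello3_B=222.
+# A single variant can be selected from the command line with the -t option
+# described in the README, e.g.: bin/smash -t 1 lib/examples/test_hello3.py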
+
+logger.info("Loading testcase module...")
+
+@add_worker
+def helloA(step):
+    step(1)
+
+    logger.info("My A=%d",A)
+    list_files('.')
+
+@add_worker
+def helloB(step):
+    step(2)
+
+    logger.info("My B=%d",B)
+
+
+
+@add_worker
+def helloC(step):
+    step(3)
+
+    logger.info("My A+B=%d",A+B)
diff --git a/lib/oc-tests/test_reshareDir.py b/lib/oc-tests/test_reshareDir.py
index da8cce2..57b5e76 100644
--- a/lib/oc-tests/test_reshareDir.py
+++ b/lib/oc-tests/test_reshareDir.py
@@ -46,8 +46,31 @@
 filesizeKB = int(config.get('share_filesizeKB',10))
 
+# True => use new webdav endpoint (dav/files)
+# False => use old webdav endpoint (webdav)
+use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True))
+
+testsets = [
+    {
+        'use_new_dav_endpoint':False
+    },
+    {
+        'use_new_dav_endpoint':True
+    }
+]
+
+def finish_if_not_capable():
+    # Finish the test if some of the prerequisites for this test are not satisfied
+    if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True:
+        # Don't test with the new endpoint on servers older than 10.0, since it is not supported there
+        logger.warn("Skipping test since webdav endpoint is not capable for this server version")
+        return True
+    return False
+
 @add_worker
 def setup(step):
+    if
finish_if_not_capable(): + return step (1, 'create test users') reset_owncloud_account(num_test_users=config.oc_number_test_users) @@ -101,6 +124,8 @@ def setup(step): @add_worker def sharer(step): + if finish_if_not_capable(): + return step (2, 'Create workdir') d = make_workdir() @@ -120,7 +145,7 @@ def sharer(step): logger.info('md5_sharer: %s',shared['md5_sharer']) list_files(d) - run_ocsync(d,user_num=1) + run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) step (4, 'Sharer shares directory') @@ -133,15 +158,15 @@ def sharer(step): shared['SHARE_LOCAL_DIR'] = share_file_with_user ('localShareDir', user1, user2, **kwargs) step (7, 'Sharer validates modified file') - run_ocsync(d,user_num=1) + run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) expect_modified(os.path.join(localDir,'TEST_FILE_MODIFIED_USER_SHARE.dat'), shared['md5_sharer']) step (9, 'Sharer validates newly added file') - run_ocsync(d,user_num=1) + run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) expect_exists(os.path.join(localDir,'TEST_FILE_NEW_USER_SHARE.dat')) step (11, 'Sharer validates deleted file') - run_ocsync(d,user_num=1) + run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) expect_does_not_exist(os.path.join(localDir,'TEST_FILE_NEW_USER_SHARE.dat')) step (16, 'Sharer unshares the directory') @@ -151,13 +176,15 @@ def sharer(step): @add_worker def shareeOne(step): + if finish_if_not_capable(): + return step (2, 'Sharee One creates workdir') d = make_workdir() step (5, 'Sharee One syncs and validates directory exist') - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedDir = os.path.join(d,'localShareDir') @@ -167,18 +194,18 @@ def shareeOne(step): step (6, 'Sharee One modifies TEST_FILE_MODIFIED_USER_SHARE.dat') modify_file(os.path.join(d,'localShareDir/TEST_FILE_MODIFIED_USER_SHARE.dat'),'1',count=10,bs=filesizeKB) - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) step (8, 'Sharee One adds a file to the directory') createfile(os.path.join(d,'localShareDir/TEST_FILE_NEW_USER_SHARE.dat'),'0',count=1000,bs=filesizeKB) - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) step (10, 'Sharee One deletes a file from the directory') fileToDelete = os.path.join(d,'localShareDir/TEST_FILE_NEW_USER_SHARE.dat') delete_file (fileToDelete) - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) step (12, 'Sharee One share files with user 3') @@ -197,7 +224,7 @@ def shareeOne(step): step (17, 'Sharee One syncs and validates directory does not exist') - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'localShareDir') @@ -208,6 +235,8 @@ def shareeOne(step): @add_worker def shareeTwo(step): + if finish_if_not_capable(): + return step (2, 'Sharee Two creates workdir') d = make_workdir() @@ -218,12 +247,12 @@ def shareeTwo(step): # Do we want to test the client's conflict resolution or the server's? 
# Currently we test the server, to test the client comment out the sync below - run_ocsync(d,user_num=3) + run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) step (13, 'Sharee two validates share file') - run_ocsync(d,user_num=3) + run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_USER_RESHARE.dat') @@ -232,7 +261,7 @@ def shareeTwo(step): step (15, 'Sharee two validates directory re-share') - run_ocsync(d,user_num=3) + run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) localDir = os.path.join(d,'localShareDir') @@ -256,7 +285,7 @@ def shareeTwo(step): else: step(18, 'Sharee two syncs and validates directory does still exist') - run_ocsync(d,user_num=3) + run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) localDir = os.path.join(d, 'localShareDir') diff --git a/lib/oc-tests/test_shareFile.py b/lib/oc-tests/test_shareFile.py index bb14093..5213ad8 100644 --- a/lib/oc-tests/test_shareFile.py +++ b/lib/oc-tests/test_shareFile.py @@ -70,20 +70,49 @@ filesizeKB = int(config.get('share_filesizeKB',10)) sharePermissions = config.get('test_sharePermissions', OCS_PERMISSION_ALL) +# True => use new webdav endpoint (dav/files) +# False => use old webdav endpoint (webdav) +use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True)) + testsets = [ { - 'test_sharePermissions':OCS_PERMISSION_ALL + 'test_sharePermissions':OCS_PERMISSION_ALL, + 'use_new_dav_endpoint':True + }, + { + 'test_sharePermissions':OCS_PERMISSION_ALL, + 'use_new_dav_endpoint':False }, { - 'test_sharePermissions':OCS_PERMISSION_READ | OCS_PERMISSION_UPDATE + 'test_sharePermissions':OCS_PERMISSION_READ | OCS_PERMISSION_UPDATE, + 'use_new_dav_endpoint':True + }, + { + 'test_sharePermissions':OCS_PERMISSION_READ | OCS_PERMISSION_UPDATE, + 'use_new_dav_endpoint':False }, { - 'test_sharePermissions':OCS_PERMISSION_READ | OCS_PERMISSION_SHARE + 'test_sharePermissions':OCS_PERMISSION_READ | OCS_PERMISSION_SHARE, + 'use_new_dav_endpoint':True + }, + { + 'test_sharePermissions':OCS_PERMISSION_READ | OCS_PERMISSION_SHARE, + 'use_new_dav_endpoint':False } ] +def finish_if_not_capable(): + # Finish the test if some of the prerequisites for this test are not satisfied + if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True: + #Dont test for <= 9.1 with new endpoint, since it is not supported + logger.warn("Skipping test since webdav endpoint is not capable for this server version") + return True + return False + @add_worker def setup(step): + if finish_if_not_capable(): + return step (1, 'create test users') reset_owncloud_account(num_test_users=config.oc_number_test_users) @@ -93,6 +122,8 @@ def setup(step): @add_worker def sharer(step): + if finish_if_not_capable(): + return step (2, 'Create workdir') d = make_workdir() @@ -108,7 +139,7 @@ def sharer(step): logger.info('md5_sharer: %s',shared['md5_sharer']) list_files(d) - run_ocsync(d,user_num=1) + run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) step (4,'Sharer shares files') @@ -123,7 +154,7 @@ def sharer(step): shared['sharer.TEST_FILE_MODIFIED_USER_SHARE'] = os.path.join(d,'TEST_FILE_MODIFIED_USER_SHARE.dat') step (7, 'Sharer validates modified file') - run_ocsync(d,user_num=1) + run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) if not sharePermissions & OCS_PERMISSION_UPDATE: expect_not_modified(os.path.join(d,'TEST_FILE_MODIFIED_USER_SHARE.dat'), 
shared['md5_sharer']) @@ -137,20 +168,22 @@ def sharer(step): list_files(d) remove_file(os.path.join(d,'TEST_FILE_USER_SHARE.dat')) - run_ocsync(d,user_num=1) + run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) step (14, 'Sharer final step') @add_worker def shareeOne(step): + if finish_if_not_capable(): + return step (2, 'Sharee One creates workdir') d = make_workdir() step (5, 'Sharee One syncs and validate files exist') - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_USER_SHARE.dat') @@ -168,7 +201,7 @@ def shareeOne(step): step (6, 'Sharee One modifies TEST_FILE_MODIFIED_USER_SHARE.dat') modify_file(os.path.join(d,'TEST_FILE_MODIFIED_USER_SHARE.dat'),'1',count=10,bs=filesizeKB) - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) shared = reflection.getSharedObject() @@ -190,7 +223,7 @@ def shareeOne(step): step (11, 'Sharee one validates file does not exist after unsharing') - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_USER_RESHARE.dat') @@ -199,7 +232,7 @@ def shareeOne(step): step (13, 'Sharee syncs and validates file does not exist') - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_USER_SHARE.dat') @@ -210,13 +243,15 @@ def shareeOne(step): @add_worker def shareeTwo(step): + if finish_if_not_capable(): + return step (2, 'Sharee Two creates workdir') d = make_workdir() step (9, 'Sharee two validates share file') - run_ocsync(d,user_num=3) + run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_USER_RESHARE.dat') @@ -230,7 +265,7 @@ def shareeTwo(step): if compare_oc_version('9.0', '<') or not sharePermissions & OCS_PERMISSION_SHARE: step(11, 'Sharee two validates file does not exist after unsharing') - run_ocsync(d, user_num=3) + run_ocsync(d, user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) shared_file = os.path.join(d,'TEST_FILE_USER_RESHARE.dat') @@ -240,7 +275,7 @@ def shareeTwo(step): else: step(11, 'Sharee two validates file still exist after unsharing for sharee one') - run_ocsync(d, user_num=3) + run_ocsync(d, user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) shared_file = os.path.join(d,'TEST_FILE_USER_RESHARE.dat') diff --git a/lib/oc-tests/test_shareGroup.py b/lib/oc-tests/test_shareGroup.py index fbc9e45..88c1c38 100644 --- a/lib/oc-tests/test_shareGroup.py +++ b/lib/oc-tests/test_shareGroup.py @@ -81,8 +81,31 @@ OCS_PERMISSION_SHARE = 16 OCS_PERMISSION_ALL = 31 +# True => use new webdav endpoint (dav/files) +# False => use old webdav endpoint (webdav) +use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True)) + +testsets = [ + { + 'use_new_dav_endpoint':False + }, + { + 'use_new_dav_endpoint':True + } +] + +def finish_if_not_capable(): + # Finish the test if some of the prerequisites for this test are not satisfied + if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True: + #Dont test for <= 9.1 with new endpoint, since it is not supported + logger.warn("Skipping test since webdav endpoint is not capable for this server version") + return True + return False + @add_worker def setup(step): + if finish_if_not_capable(): + return step (1, 'create test 
users') reset_owncloud_account(num_test_users=config.oc_number_test_users) @@ -99,6 +122,8 @@ def setup(step): @add_worker def sharer(step): + if finish_if_not_capable(): + return step (2, 'Create workdir') d = make_workdir() @@ -114,7 +139,7 @@ def sharer(step): logger.info('md5_sharer: %s',shared['md5_sharer']) list_files(d) - run_ocsync(d,user_num=1) + run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) step (4, 'Sharer shares files') @@ -128,7 +153,7 @@ def sharer(step): shared['TEST_FILE_MODIFIED_GROUP_SHARE'] = share_file_with_group ('TEST_FILE_MODIFIED_GROUP_SHARE.dat', user1, group, **kwargs) step (7, 'Sharer validates modified file') - run_ocsync(d,user_num=1) + run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) expect_modified(os.path.join(d,'TEST_FILE_MODIFIED_GROUP_SHARE.dat'), shared['md5_sharer'], comment=" compared to original file from sharer") expect_not_modified(os.path.join(d,'TEST_FILE_MODIFIED_GROUP_SHARE.dat'), shared['md5_shareeGroup'], comment=" compared to file from Sharee Group") @@ -140,20 +165,22 @@ def sharer(step): list_files(d) remove_file(os.path.join(d,'TEST_FILE_MODIFIED_GROUP_SHARE.dat')) - run_ocsync(d,user_num=1) + run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) step (16, 'Sharer Final step') @add_worker def shareeGroup(step): + if finish_if_not_capable(): + return step (2, 'Sharee Group creates workdir') d = make_workdir() step (5, 'Sharee Group syncs and validate files do exist') - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_GROUP_SHARE.dat') @@ -173,7 +200,7 @@ def shareeGroup(step): modify_file(os.path.join(d,'TEST_FILE_MODIFIED_GROUP_SHARE.dat'),'1',count=10,bs=filesizeKB) shared = reflection.getSharedObject() shared['md5_shareeGroup'] = md5sum(os.path.join(d,'TEST_FILE_MODIFIED_GROUP_SHARE.dat')) - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) step (8, 'Sharee Group shares file with Sharee One') @@ -185,7 +212,7 @@ def shareeGroup(step): step (11, 'Sharee Group validates file does not exist after unsharing') - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_GROUP_RESHARE.dat') @@ -194,7 +221,7 @@ def shareeGroup(step): step (13, 'Sharee Group validates file does not exist after deleting') - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_MODIFIED_GROUP_SHARE.dat') @@ -203,7 +230,7 @@ def shareeGroup(step): step (15, 'Sharee Group validates file does not exist after being removed from group') - run_ocsync(d,user_num=2) + run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_GROUP_SHARE.dat') @@ -214,13 +241,15 @@ def shareeGroup(step): @add_worker def reshareeUser(step): + if finish_if_not_capable(): + return step (2, 'Re-Sharee User creates workdir') d = make_workdir() step (5, 'Re-Sharee User syncs and validate files do not exist') - run_ocsync(d,user_num=3) + run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_GROUP_SHARE.dat') @@ -237,7 +266,7 @@ def reshareeUser(step): step (9, 'Re-Sharee User validates share file') - run_ocsync(d,user_num=3) + 
run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_GROUP_RESHARE.dat') @@ -249,7 +278,7 @@ def reshareeUser(step): else: step(11, 'Re-Sharee User validates file does still exist after unsharing') - run_ocsync(d,user_num=3) + run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_GROUP_RESHARE.dat') @@ -262,7 +291,7 @@ def reshareeUser(step): step (13, 'Re-Sharee User syncs and validates file does not exist') - run_ocsync(d,user_num=3) + run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) list_files(d) sharedFile = os.path.join(d,'TEST_FILE_GROUP_SHARE.dat') @@ -273,7 +302,8 @@ def reshareeUser(step): @add_worker def admin(step): - + if finish_if_not_capable(): + return step (14, 'Admin user removes user from group') diff --git a/lib/oc-tests/test_shareLink.py b/lib/oc-tests/test_shareLink.py new file mode 100644 index 0000000..767a5e2 --- /dev/null +++ b/lib/oc-tests/test_shareLink.py @@ -0,0 +1,340 @@ +from smashbox.utilities import * + +__doc__ = """ + +Test basic file sharing by link + +Covers: + * Single file share: https://github.com/owncloud/core/pull/19619 + * Folder share, single file direct download (click on file list) + * Folder share, select single file and download (checkbox) + * Folder share, select multiple files and download (checkbox) + * Folder share, download full folder + +""" + + +filesize_kb = int(config.get('share_filesizeKB', 10)) + +test_downloader = config.get('test_downloader', 'full_folder') + +testsets = [ + { + 'test_downloader': 'single_file' + }, + { + 'test_downloader': 'direct_single_files' + }, + { + 'test_downloader': 'selected_single_files' + }, + { + 'test_downloader': 'full_folder' + }, + { + 'test_downloader': 'full_subfolder' + }, + { + 'test_downloader': 'selected_files' + } +] + + +@add_worker +def setup(step): + + step(1, 'create test users') + reset_owncloud_account(num_test_users=2) + check_users(2) + + reset_rundir() + reset_server_log_file() + + step(6, 'Validate server log file is clean') + + d = make_workdir() + scrape_log_file(d) + + +@add_worker +def sharer(step): + + step(2, 'Create workdir') + d = make_workdir() + + step(3, 'Create initial test files and directories') + + proc_name = reflection.getProcessName() + dir_name = os.path.join(proc_name, 'localShareDir') + local_dir = make_workdir(dir_name) + subdir_dir = make_workdir(os.path.join(dir_name, 'subdir')) + + createfile(os.path.join(local_dir, 'TEST_FILE_LINK_SHARE1.txt'), '1', count=1000, bs=filesize_kb) + createfile(os.path.join(local_dir, 'TEST_FILE_LINK_SHARE2.txt'), '2', count=1000, bs=filesize_kb) + createfile(os.path.join(local_dir, 'TEST_FILE_LINK_SHARE3.txt'), '3', count=1000, bs=filesize_kb) + createfile(os.path.join(subdir_dir, 'TEST_FILE_LINK_SHARE4.txt'), '4', count=1000, bs=filesize_kb) + createfile(os.path.join(subdir_dir, 'TEST_FILE_LINK_SHARE5.txt'), '5', count=1000, bs=filesize_kb) + createfile(os.path.join(subdir_dir, 'TEST_FILE_LINK_SHARE6.txt'), '6', count=1000, bs=filesize_kb) + shared = reflection.getSharedObject() + shared['MD5_TEST_FILE_LINK_SHARE1'] = md5sum(os.path.join(local_dir, 'TEST_FILE_LINK_SHARE1.txt')) + shared['MD5_TEST_FILE_LINK_SHARE2'] = md5sum(os.path.join(local_dir, 'TEST_FILE_LINK_SHARE2.txt')) + shared['MD5_TEST_FILE_LINK_SHARE3'] = md5sum(os.path.join(local_dir, 'TEST_FILE_LINK_SHARE3.txt')) + shared['MD5_TEST_FILE_LINK_SHARE4'] = md5sum(os.path.join(subdir_dir, 
'TEST_FILE_LINK_SHARE4.txt')) + shared['MD5_TEST_FILE_LINK_SHARE5'] = md5sum(os.path.join(subdir_dir, 'TEST_FILE_LINK_SHARE5.txt')) + shared['MD5_TEST_FILE_LINK_SHARE6'] = md5sum(os.path.join(subdir_dir, 'TEST_FILE_LINK_SHARE6.txt')) + + list_files(d) + run_ocsync(d, user_num=1) + list_files(d) + + step(4, 'Sharer shares file as link') + + oc_api = get_oc_api() + oc_api.login("%s%i" % (config.oc_account_name, 1), config.oc_account_password) + + kwargs = {'perms': 31} + share = oc_api.share_file_with_link(os.path.join('localShareDir', 'TEST_FILE_LINK_SHARE1.txt'), **kwargs) + shared['SHARE_LINK_TOKEN_TEST_FILE_LINK_SHARE1'] = share.token + share = oc_api.share_file_with_link('localShareDir', **kwargs) + shared['SHARE_LINK_TOKEN_TEST_DIR'] = share.token + + +def public_downloader_single_file(step): + + step(2, 'Create workdir') + d = make_workdir() + + step(5, 'Downloads and validate') + + shared = reflection.getSharedObject() + url = oc_webdav_url( + remote_folder=os.path.join('index.php', 's', shared['SHARE_LINK_TOKEN_TEST_FILE_LINK_SHARE1'], 'download'), + webdav_endpoint=config.oc_root + ) + + download_target = os.path.join(d, 'TEST_FILE_LINK_SHARE1.txt') + runcmd('curl -k %s -o \'%s\' \'%s\'' % (config.get('curl_opts', ''), download_target, url)) + expect_not_modified(download_target, shared['MD5_TEST_FILE_LINK_SHARE1']) + + +def public_downloader_direct_single_files(step): + + step(2, 'Create workdir') + d = make_workdir() + + step(5, 'Downloads and validate') + + shared = reflection.getSharedObject() + url = oc_webdav_url( + remote_folder=os.path.join( + 'index.php', 's', + shared['SHARE_LINK_TOKEN_TEST_DIR'], + 'download?path=%2F&files=TEST_FILE_LINK_SHARE1.txt' + ), + webdav_endpoint=config.oc_root + ) + + download_target = os.path.join(d, 'TEST_FILE_LINK_SHARE1.txt') + runcmd('curl -k %s -o \'%s\' \'%s\'' % (config.get('curl_opts', ''), download_target, url)) + expect_not_modified(download_target, shared['MD5_TEST_FILE_LINK_SHARE1']) + + +def public_downloader_selected_single_files(step): + + step(2, 'Create workdir') + d = make_workdir() + + step(5, 'Downloads and validate') + + shared = reflection.getSharedObject() + + if compare_oc_version('10.0', '<'): + url = oc_webdav_url( + remote_folder=os.path.join( + 'index.php', 's', + shared['SHARE_LINK_TOKEN_TEST_DIR'], + 'download?path=%2F&files=%5B%22TEST_FILE_LINK_SHARE1.txt%22%5D' + ), + webdav_endpoint=config.oc_root + ) + else: + # Api changed in 10.0 + # http://localhost/owncloudtest/index.php/s/Q3ZMB4S8xveM2x5/download?path=%2F&files[]=TEST_FILE_LINK_SHARE1.txt&files[]=TEST_FILE_LINK_SHARE2.txt + url = oc_webdav_url( + remote_folder=os.path.join( + 'index.php', 's', + shared['SHARE_LINK_TOKEN_TEST_DIR'], + 'download?path=%2F&files%5B%5D=TEST_FILE_LINK_SHARE1.txt' + ), + webdav_endpoint=config.oc_root + ) + + download_target = os.path.join(d, 'TEST_FILE_LINK_SHARE1.txt') + runcmd('curl -k %s -o \'%s\' \'%s\'' % (config.get('curl_opts', ''), download_target, url)) + expect_not_modified(download_target, shared['MD5_TEST_FILE_LINK_SHARE1']) + + +def public_downloader_full_folder(step): + + step(2, 'Create workdir') + d = make_workdir() + + step(5, 'Downloads and validate') + + shared = reflection.getSharedObject() + url = oc_webdav_url( + remote_folder=os.path.join('index.php', 's', shared['SHARE_LINK_TOKEN_TEST_DIR'], 'download'), + webdav_endpoint=config.oc_root + ) + + download_target = os.path.join(d, '%s%s' % (shared['SHARE_LINK_TOKEN_TEST_DIR'], '.zip')) + unzip_target = os.path.join(d, 'unzip') + runcmd('curl -v -k %s -o 
\'%s\' \'%s\'' % (config.get('curl_opts', ''), download_target, url)) + runcmd('unzip -d %s %s' % (unzip_target, download_target)) + + list_files(d, recursive=True) + + expect_exists(os.path.join(unzip_target, 'localShareDir')) + expect_exists(os.path.join(unzip_target, 'localShareDir', 'TEST_FILE_LINK_SHARE1.txt')) + expect_exists(os.path.join(unzip_target, 'localShareDir', 'TEST_FILE_LINK_SHARE2.txt')) + expect_exists(os.path.join(unzip_target, 'localShareDir', 'TEST_FILE_LINK_SHARE3.txt')) + expect_exists(os.path.join(unzip_target, 'localShareDir', 'subdir')) + expect_exists(os.path.join(unzip_target, 'localShareDir', 'subdir', 'TEST_FILE_LINK_SHARE4.txt')) + expect_exists(os.path.join(unzip_target, 'localShareDir', 'subdir', 'TEST_FILE_LINK_SHARE5.txt')) + expect_exists(os.path.join(unzip_target, 'localShareDir', 'subdir', 'TEST_FILE_LINK_SHARE6.txt')) + + expect_not_modified( + os.path.join(unzip_target, 'localShareDir', 'TEST_FILE_LINK_SHARE1.txt'), + shared['MD5_TEST_FILE_LINK_SHARE1'] + ) + expect_not_modified( + os.path.join(unzip_target, 'localShareDir', 'TEST_FILE_LINK_SHARE2.txt'), + shared['MD5_TEST_FILE_LINK_SHARE2'] + ) + expect_not_modified( + os.path.join(unzip_target, 'localShareDir', 'TEST_FILE_LINK_SHARE3.txt'), + shared['MD5_TEST_FILE_LINK_SHARE3'] + ) + expect_not_modified( + os.path.join(unzip_target, 'localShareDir', 'subdir', 'TEST_FILE_LINK_SHARE4.txt'), + shared['MD5_TEST_FILE_LINK_SHARE4'] + ) + expect_not_modified( + os.path.join(unzip_target, 'localShareDir', 'subdir', 'TEST_FILE_LINK_SHARE5.txt'), + shared['MD5_TEST_FILE_LINK_SHARE5'] + ) + expect_not_modified( + os.path.join(unzip_target, 'localShareDir', 'subdir', 'TEST_FILE_LINK_SHARE6.txt'), + shared['MD5_TEST_FILE_LINK_SHARE6'] + ) + + +def public_downloader_full_subfolder(step): + + step(2, 'Create workdir') + d = make_workdir() + + step(5, 'Downloads and validate') + + shared = reflection.getSharedObject() + url = oc_webdav_url( + remote_folder=os.path.join( + 'index.php', + 's', + shared['SHARE_LINK_TOKEN_TEST_DIR'], + 'download?path=%2F&files=subdir' + ), + webdav_endpoint=config.oc_root + ) + + download_target = os.path.join(d, '%s%s' % (shared['SHARE_LINK_TOKEN_TEST_DIR'], '.zip')) + unzip_target = os.path.join(d, 'unzip') + runcmd('curl -v -k %s -o \'%s\' \'%s\'' % (config.get('curl_opts', ''), download_target, url)) + runcmd('unzip -d %s %s' % (unzip_target, download_target)) + + list_files(d, recursive=True) + + expect_exists(os.path.join(unzip_target, 'subdir')) + expect_exists(os.path.join(unzip_target, 'subdir', 'TEST_FILE_LINK_SHARE4.txt')) + expect_exists(os.path.join(unzip_target, 'subdir', 'TEST_FILE_LINK_SHARE5.txt')) + expect_exists(os.path.join(unzip_target, 'subdir', 'TEST_FILE_LINK_SHARE6.txt')) + + expect_not_modified( + os.path.join(unzip_target, 'subdir', 'TEST_FILE_LINK_SHARE4.txt'), + shared['MD5_TEST_FILE_LINK_SHARE4'] + ) + expect_not_modified( + os.path.join(unzip_target, 'subdir', 'TEST_FILE_LINK_SHARE5.txt'), + shared['MD5_TEST_FILE_LINK_SHARE5'] + ) + expect_not_modified( + os.path.join(unzip_target, 'subdir', 'TEST_FILE_LINK_SHARE6.txt'), + shared['MD5_TEST_FILE_LINK_SHARE6'] + ) + + +def public_downloader_selected_files(step): + + step(2, 'Create workdir') + d = make_workdir() + + step(5, 'Downloads and validate') + + shared = reflection.getSharedObject() + + + if compare_oc_version('10.0', '<'): + url = oc_webdav_url( + remote_folder=os.path.join( + 'index.php', 's', + shared['SHARE_LINK_TOKEN_TEST_DIR'], + 
'download?path=%2F&files=%5B%22TEST_FILE_LINK_SHARE1.txt%22%2C%22TEST_FILE_LINK_SHARE2.txt%22%5D' + ), + webdav_endpoint=config.oc_root + ) + else: + # Api changed in 10.0 + # http://localhost/owncloudtest/index.php/s/Q3ZMB4S8xveM2x5/download?path=%2F&files[]=TEST_FILE_LINK_SHARE1.txt&files[]=TEST_FILE_LINK_SHARE2.txt + url = oc_webdav_url( + remote_folder=os.path.join( + 'index.php', 's', + shared['SHARE_LINK_TOKEN_TEST_DIR'], + 'download?path=%2F&files%5B%5D=TEST_FILE_LINK_SHARE1.txt&files%5B%5D=TEST_FILE_LINK_SHARE2.txt' + ), + webdav_endpoint=config.oc_root + ) + + download_target = os.path.join(d, '%s%s' % (shared['SHARE_LINK_TOKEN_TEST_DIR'], '.zip')) + unzip_target = os.path.join(d, 'unzip') + runcmd('curl -v -k %s -o \'%s\' \'%s\'' % (config.get('curl_opts', ''), download_target, url)) + runcmd('unzip -d %s %s' % (unzip_target, download_target)) + + list_files(d, recursive=True) + + expect_does_not_exist(os.path.join(unzip_target, 'localShareDir')) + expect_exists(os.path.join(unzip_target, 'TEST_FILE_LINK_SHARE1.txt')) + expect_exists(os.path.join(unzip_target, 'TEST_FILE_LINK_SHARE2.txt')) + expect_does_not_exist(os.path.join(unzip_target, 'TEST_FILE_LINK_SHARE3.txt')) + + expect_not_modified( + os.path.join(unzip_target, 'TEST_FILE_LINK_SHARE1.txt'), + shared['MD5_TEST_FILE_LINK_SHARE1'] + ) + expect_not_modified( + os.path.join(unzip_target, 'TEST_FILE_LINK_SHARE2.txt'), + shared['MD5_TEST_FILE_LINK_SHARE2'] + ) + + +if test_downloader == 'single_file': + add_worker(public_downloader_single_file, name=test_downloader) +elif test_downloader == 'direct_single_files': + add_worker(public_downloader_direct_single_files, name=test_downloader) +elif test_downloader == 'selected_single_files': + add_worker(public_downloader_selected_single_files, name=test_downloader) +elif test_downloader == 'full_folder': + add_worker(public_downloader_full_folder, name=test_downloader) +elif test_downloader == 'full_subfolder': + add_worker(public_downloader_full_subfolder, name=test_downloader) +elif test_downloader == 'selected_files': + add_worker(public_downloader_selected_files, name=test_downloader) diff --git a/lib/oc-tests/test_sharePermissions.py b/lib/oc-tests/test_sharePermissions.py new file mode 100644 index 0000000..ba95ca9 --- /dev/null +++ b/lib/oc-tests/test_sharePermissions.py @@ -0,0 +1,641 @@ + +__doc__ = """ + +Test share permission enforcement + ++-----------+----------------------+------------------------+ +| Step | Owner | Recipient | +| Number | | | ++===========+======================+========================+ +| 2 | Create work dir | Create work dir | ++-----------+----------------------+------------------------+ +| 3 | Create test folder | | ++-----------+----------------------+------------------------+ +| 4 | Shares folder with | | +| | Recipient | | ++-----------+----------------------+------------------------+ +| 5 | | Check permission | +| | | enforcement for every | +| | | operation | ++-----------+----------------------+------------------------+ +| 6 | Final | Final | ++-----------+----------------------+------------------------+ + +Data Providers: + + sharePermissions_matrix: Permissions to be applied to the share, + combined with the expected result for + every file operation + +""" + +from smashbox.utilities import * + +import owncloud + +OCS_PERMISSION_READ = 1 +OCS_PERMISSION_UPDATE = 2 +OCS_PERMISSION_CREATE = 4 +OCS_PERMISSION_DELETE = 8 +OCS_PERMISSION_SHARE = 16 +OCS_PERMISSION_ALL = 31 + +ALL_OPERATIONS = [ + # a new file can be uploaded/created (file 
target does not exist) + 'upload', + # a file can overwrite an existing one + 'upload_overwrite', + # rename file to new name, all within the shared folder + 'rename', + # move a file from outside the shared folder into the shared folder + 'move_in', + # move a file from outside the shared folder and overwrite a file inside the shared folder + # (note: SabreDAV automatically deletes the target file first before moving, so requires DELETE permission too) + 'move_in_overwrite', + # move a file already in the shared folder into a subdir within the shared folder + 'move_in_subdir', + # move a file already in the shared folder into a subdir within the shared folder and overwrite an existing file there + 'move_in_subdir_overwrite', + # move a file to outside of the shared folder + 'move_out', + # move a file out of a subdir of the shared folder into the shared folder + 'move_out_subdir', + # copy a file from outside the shared folder into the shared folder + 'copy_in', + # copy a file from outside the shared folder and overwrite a file inside the shared folder + # (note: SabreDAV automatically deletes the target file first before copying, so requires DELETE permission too) + 'copy_in_overwrite', + # delete a file inside the shared folder + 'delete', + # create folder inside the shared folder + 'mkdir', + # delete folder inside the shared folder + 'rmdir', +] + +# True => use new webdav endpoint (dav/files) +# False => use old webdav endpoint (webdav) +use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True)) + +""" + Permission matrix parameters (they all default to False): + + - 'permission': permissions to apply + - 'allowed_operations': allowed operations, see ALL_OPERATIONS for more info +""" +testsets = [ + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_ALL, + 'allowed_operations': [ + 'upload', + 'upload_overwrite', + 'rename', + 'move_in', + 'move_in_overwrite', + 'move_in_subdir', + 'move_in_subdir_overwrite', + 'move_out', + 'move_out_subdir', + 'copy_in', + 'copy_in_overwrite', + 'delete', + 'mkdir', + 'rmdir', + ] + }, + 'use_new_dav_endpoint': False + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_ALL, + 'allowed_operations': [ + 'upload', + 'upload_overwrite', + 'rename', + 'move_in', + 'move_in_overwrite', + 'move_in_subdir', + 'move_in_subdir_overwrite', + 'move_out', + 'move_out_subdir', + 'copy_in', + 'copy_in_overwrite', + 'delete', + 'mkdir', + 'rmdir', + ] + }, + 'use_new_dav_endpoint': True + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ, + 'allowed_operations': [] + }, + 'use_new_dav_endpoint': False + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ, + 'allowed_operations': [] + }, + 'use_new_dav_endpoint': True + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_CREATE, + 'allowed_operations': [ + 'upload', + 'move_in', + 'copy_in', + 'mkdir', + ] + }, + 'use_new_dav_endpoint': False + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_CREATE, + 'allowed_operations': [ + 'upload', + 'move_in', + 'copy_in', + 'mkdir', + ] + }, + 'use_new_dav_endpoint': True + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_UPDATE, + 'allowed_operations': [ + 'upload_overwrite', + 'rename', + ] + }, + 'use_new_dav_endpoint': False + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_UPDATE, + 'allowed_operations': [ + 
'upload_overwrite', + 'rename', + ] + }, + 'use_new_dav_endpoint': True + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_DELETE, + 'allowed_operations': [ + 'move_out', + 'delete', + 'rmdir', + ] + }, + 'use_new_dav_endpoint': False + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_DELETE, + 'allowed_operations': [ + 'move_out', + 'delete', + 'rmdir', + ] + }, + 'use_new_dav_endpoint': True + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_CREATE | OCS_PERMISSION_UPDATE, + 'allowed_operations': [ + 'upload', + 'upload_overwrite', + 'rename', + 'move_in', + 'copy_in', + 'mkdir', + ] + }, + 'use_new_dav_endpoint': False + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_CREATE | OCS_PERMISSION_UPDATE, + 'allowed_operations': [ + 'upload', + 'upload_overwrite', + 'rename', + 'move_in', + 'copy_in', + 'mkdir', + ] + }, + 'use_new_dav_endpoint': True + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_CREATE | OCS_PERMISSION_DELETE, + 'allowed_operations': [ + 'upload', + 'move_in', + 'move_in_overwrite', + 'move_in_subdir', + 'move_in_subdir_overwrite', + 'move_out', + 'move_out_subdir', + 'copy_in', + 'copy_in_overwrite', + 'delete', + 'mkdir', + 'rmdir', + ] + }, + 'use_new_dav_endpoint': False + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_CREATE | OCS_PERMISSION_DELETE, + 'allowed_operations': [ + 'upload', + 'move_in', + 'move_in_overwrite', + 'move_in_subdir', + 'move_in_subdir_overwrite', + 'move_out', + 'move_out_subdir', + 'copy_in', + 'copy_in_overwrite', + 'delete', + 'mkdir', + 'rmdir', + ] + }, + 'use_new_dav_endpoint': True + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_UPDATE | OCS_PERMISSION_DELETE, + 'allowed_operations': [ + 'upload_overwrite', + 'rename', + 'move_out', + 'delete', + 'rmdir', + ] + }, + 'use_new_dav_endpoint': False + }, + { + 'sharePermissions_matrix': { + 'permission': OCS_PERMISSION_READ | OCS_PERMISSION_UPDATE | OCS_PERMISSION_DELETE, + 'allowed_operations': [ + 'upload_overwrite', + 'rename', + 'move_out', + 'delete', + 'rmdir', + ] + }, + 'use_new_dav_endpoint': True + } +] + +permission_matrix = config.get('sharePermissions_matrix', testsets[0]['sharePermissions_matrix']) + +SHARED_DIR_NAME = 'shared-dir' + +def finish_if_not_capable(): + # Finish the test if some of the prerequisites for this test are not satisfied + if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True: + #Dont test for <= 9.1 with new endpoint, since it is not supported + logger.warn("Skipping test since webdav endpoint is not capable for this server version") + return True + return False + +@add_worker +def setup(step): + if finish_if_not_capable(): + return + + step (1, 'create test users') + reset_owncloud_account(num_test_users=2) + check_users(2) + + reset_rundir() + +@add_worker +def owner_worker(step): + if finish_if_not_capable(): + return + + step (2, 'Create workdir') + d = make_workdir() + + step (3, 'Create test folder') + + logger.info(permission_matrix) + perms = permission_matrix['permission'] + + mkdir(os.path.join(d, SHARED_DIR_NAME)) + mkdir(os.path.join(d, SHARED_DIR_NAME, 'subdir')) + + mkdir(os.path.join(d, SHARED_DIR_NAME, 'delete_this_dir')) + createfile(os.path.join(d, SHARED_DIR_NAME, 'move_this_out.dat'),'0',count=1000,bs=1) + 
createfile(os.path.join(d, SHARED_DIR_NAME, 'move_this_to_subdir.dat'),'0',count=1000,bs=1) + createfile(os.path.join(d, SHARED_DIR_NAME, 'move_this_to_subdir_for_overwrite.dat'),'0',count=1000,bs=1) + createfile(os.path.join(d, SHARED_DIR_NAME, 'subdir', 'move_this_out.dat'),'0',count=1000,bs=1) + createfile(os.path.join(d, SHARED_DIR_NAME, 'subdir', 'overwrite_this.dat'),'0',count=1000,bs=1) + createfile(os.path.join(d, SHARED_DIR_NAME, 'rename_this.dat'),'0',count=1000,bs=1) + createfile(os.path.join(d, SHARED_DIR_NAME, 'overwrite_this.dat'),'0',count=1000,bs=1) + createfile(os.path.join(d, SHARED_DIR_NAME, 'overwrite_this_through_move_in.dat'),'0',count=1000,bs=1) + createfile(os.path.join(d, SHARED_DIR_NAME, 'overwrite_this_through_copy_in.dat'),'0',count=1000,bs=1) + createfile(os.path.join(d, SHARED_DIR_NAME, 'delete_this.dat'),'0',count=1000,bs=1) + + createfile(os.path.join(d, SHARED_DIR_NAME, 'delete_this_dir', 'stuff.dat'),'0',count=1000,bs=1) + + list_files(d) + run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) + list_files(d) + + step (4, 'Shares folder with recipient') + + user1 = "%s%i"%(config.oc_account_name, 1) + user2 = "%s%i"%(config.oc_account_name, 2) + kwargs = {'perms': perms} + share_file_with_user(SHARED_DIR_NAME, user1, user2, **kwargs) + + step (6, 'Final') + +@add_worker +def recipient_worker(step): + if finish_if_not_capable(): + return + step (2, 'Create workdir') + d = make_workdir() + + step (5, 'Check permission enforcement for every operation') + + list_files(d) + run_ocsync(d, user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) + list_files(d) + + oc = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint) + user2 = "%s%i" % (config.oc_account_name, 2) + oc.login(user2, config.oc_account_password) + + perms = permission_matrix['permission'] + operations_test = OperationsTest(oc, d, SHARED_DIR_NAME) + + sharedDir = os.path.join(d,SHARED_DIR_NAME) + logger.info ('Checking that %s is present in local directory for recipient_worker', sharedDir) + expect_exists(sharedDir) + + for operation in ALL_OPERATIONS: + # call the matching operation method + expected_success = operation in permission_matrix['allowed_operations'] + success_message = "allowed" + if not expected_success: + success_message = "forbidden" + + error_check( + getattr(operations_test, operation)(expected_success), + 'Operation "%s" must be %s when share permissions are %i' % (operation, success_message, perms) + ) + + step (6, 'Final') + + +class OperationsTest(object): + def __init__(self, oc, work_dir, shared_dir): + self.oc = oc + self.work_dir = work_dir + self.shared_dir = shared_dir + self._testFileId = 0 + + def _make_test_file(self): + # note: the name doesn't matter for the tests + test_file = os.path.join(self.work_dir, 'local_test_file_%i.dat' % self._testFileId) + createfile(test_file, '0', count=1000, bs=1) + self._testFileId += 1 + return test_file + + def _upload(self, target_file, expected_success): + test_file = self._make_test_file() + try: + logger.info('Upload file to "%s"', target_file) + self.oc.put_file(target_file, test_file) + except owncloud.ResponseError as e: + if e.status_code == 403: + return not expected_success + + log_response_error(e) + return False + + if not self._file_exists(target_file): + logger.error('File %s not actually uploaded', target_file) + return False + + return expected_success + + def upload(self, expected_success = False): + target_file = os.path.join(self.shared_dir, 'test_upload.dat') + return self._upload(target_file, 
expected_success) + + def upload_overwrite(self, expected_success = False): + # target the existing file name + target_file = os.path.join(self.shared_dir, 'overwrite_this.dat') + return self._upload(target_file, expected_success) + + def _move(self, source_file, target_file, expected_success): + try: + logger.info('Move "%s" to "%s"', source_file, target_file) + self.oc.move(source_file, target_file) + except owncloud.ResponseError as e: + if e.status_code == 403: + return not expected_success + + log_response_error(e) + return False + + if not self._file_exists(target_file): + logger.error('%s not actually moved to %s', source_file, target_file) + return False + + return expected_success + + def _copy(self, source_file, target_file, expected_success): + try: + logger.info('Copy "%s" to "%s"', source_file, target_file) + self.oc.copy(source_file, target_file) + except owncloud.ResponseError as e: + if e.status_code == 403: + return not expected_success + + log_response_error(e) + return False + + if not self._file_exists(target_file): + logger.error('%s not actually copied to %s', source_file, target_file) + return False + + return expected_success + + def rename(self, expected_success = False): + source_file = os.path.join(self.shared_dir, 'rename_this.dat') + target_file = os.path.join(self.shared_dir, 'rename_this_renamed.dat') + return self._move(source_file, target_file, expected_success) + + def move_in(self, expected_success = False): + test_file = self._make_test_file() + target_file = 'test_move_in.dat' + + # upload the test file outside the shared dir first + self.oc.put_file(target_file, test_file) + + # then move that one into the shared dir + source_file = target_file + target_file = os.path.join(self.shared_dir, source_file) + return self._move(source_file, target_file, expected_success) + + def move_in_overwrite(self, expected_success = False): + test_file = self._make_test_file() + target_file = 'overwrite_this_through_move_in.dat' + + # upload the test file outside the shared dir first + self.oc.put_file(target_file, test_file) + + # then move that one into the shared dir + source_file = target_file + target_file = os.path.join(self.shared_dir, target_file) + return self._move(source_file, target_file, expected_success) + + def copy_in(self, expected_success = False): + test_file = self._make_test_file() + target_file = 'test_copy_in.dat' + + # upload the test file outside the shared dir first + self.oc.put_file(target_file, test_file) + + # then copy that one into the shared dir + source_file = target_file + target_file = os.path.join(self.shared_dir, source_file) + return self._copy(source_file, target_file, expected_success) + + def copy_in_overwrite(self, expected_success = False): + test_file = self._make_test_file() + target_file = 'overwrite_this_through_copy_in.dat' + + # upload the test file outside the shared dir first + self.oc.put_file(target_file, test_file) + + # then copy that one into the shared dir + source_file = target_file + target_file = os.path.join(self.shared_dir, target_file) + return self._copy(source_file, target_file, expected_success) + + def move_in_subdir(self, expected_success = False): + source_file = os.path.join(self.shared_dir, 'move_this_to_subdir.dat') + target_file = os.path.join(self.shared_dir, 'subdir', 'moved_this_to_subdir.dat') + return self._move(source_file, target_file, expected_success) + + def move_in_subdir_overwrite(self, expected_success = False): + source_file = os.path.join(self.shared_dir, 
+            'move_this_to_subdir_for_overwrite.dat')
+        target_file = os.path.join(self.shared_dir, 'subdir', 'overwrite_this.dat')
+        return self._move(source_file, target_file, expected_success)
+
+    def move_out(self, expected_success = False):
+        source_file = os.path.join(self.shared_dir, 'move_this_out.dat')
+        target_file = 'this_was_moved_out.dat'
+        return self._move(source_file, target_file, expected_success)
+
+    def move_out_subdir(self, expected_success = False):
+        source_file = os.path.join(self.shared_dir, 'subdir', 'move_this_out.dat')
+        target_file = os.path.join(self.shared_dir, 'this_was_moved_out_of_subdir.dat')
+        return self._move(source_file, target_file, expected_success)
+
+    def _delete(self, target, expected_success):
+        try:
+            logger.info('Delete "%s"', target)
+            self.oc.delete(target)
+        except owncloud.ResponseError as e:
+            if e.status_code == 403:
+                return not expected_success
+
+            log_response_error(e)
+            return False
+
+        if self._file_exists(target):
+            logger.error('%s not actually deleted', target)
+            return False
+
+        return expected_success
+
+    def delete(self, expected_success = False):
+        target = os.path.join(self.shared_dir, 'delete_this.dat')
+        return self._delete(target, expected_success)
+
+    def rmdir(self, expected_success = False):
+        target = os.path.join(self.shared_dir, 'delete_this_dir')
+        return self._delete(target, expected_success)
+
+    def mkdir(self, expected_success = False):
+        target = os.path.join(self.shared_dir, 'test_create_dir')
+
+        try:
+            logger.info('Create folder "%s"', target)
+            self.oc.mkdir(target)
+        except owncloud.ResponseError as e:
+            if e.status_code == 403:
+                return not expected_success
+
+            log_response_error(e)
+            return False
+
+        if not self._file_exists(target):
+            logger.error('Folder %s not actually created', target)
+            return False
+
+        return expected_success
+
+    def _file_exists(self, remote_file):
+        try:
+            self.oc.file_info(remote_file)
+            return True
+        except owncloud.ResponseError as e:
+            if e.status_code == 404:
+                return False
+            # unknown error
+            raise(e)
+
+
+def log_response_error(response_error):
+    """
+    @type response_error: owncloud.ResponseError
+    """
+
+    message = response_error.get_resource_body()
+
+    if message[:38] == '<?xml version="1.0" encoding="utf-8"?>':
+        import xml.etree.ElementTree as ElementTree
+
+        response_exception = ''
+        response_message = ''
+        response = message[39:]
+
+        root_element = ElementTree.fromstringlist(response)
+        if root_element.tag == '{DAV:}error':
+            for child in root_element:
+                if child.tag == '{http://sabredav.org/ns}exception':
+                    response_exception = child.text
+                if child.tag == '{http://sabredav.org/ns}message':
+                    response_message = child.text
+
+        if response_exception != '':
+            message = 'SabreDAV Exception: %s - Message: %s' % (response_exception, response_message)
+
+    logger.error('Unexpected response: Status code: %i - %s' % (response_error.status_code, message))
+    logger.info('Full Response: %s' % (response_error.get_resource_body()))
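Aside (not part of the patch): the `OCS_PERMISSION_*` constants used by this test combine as a bitmask, and the `testsets` above read more easily once decoded. A minimal, self-contained sketch in Python:

```
# Illustrative only: decode an OCS share-permission bitmask into readable flags.
OCS_PERMISSIONS = {
    1: 'read',
    2: 'update',
    4: 'create',
    8: 'delete',
    16: 'share',
}

def describe_perms(perms):
    """Return the names of the permission bits set in `perms`."""
    return [name for bit, name in sorted(OCS_PERMISSIONS.items()) if perms & bit]

# 7 == READ | UPDATE | CREATE, as used by several testsets above
print(describe_perms(7))   # ['read', 'update', 'create']
print(describe_perms(31))  # all five permissions
```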
diff --git a/lib/oc-tests/test_uploadFiles.py b/lib/oc-tests/test_uploadFiles.py
index 9238da1..dcf72ce 100644
--- a/lib/oc-tests/test_uploadFiles.py
+++ b/lib/oc-tests/test_uploadFiles.py
@@ -48,26 +48,49 @@ sharePermissions = config.get('test_sharePermissions', OCS_PERMISSION_ALL)
 numFilesToCreate = config.get('test_numFilesToCreate', 1)
 
+# True => use new webdav endpoint (dav/files)
+# False => use old webdav endpoint (webdav)
+use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True))
+
 testsets = [
     {
         'test_sharePermissions':OCS_PERMISSION_ALL,
-        'test_numFilesToCreate':50,
-        'test_filesizeKB':20000
+        'test_numFilesToCreate':5,
+        'test_filesizeKB':20000,
+        'use_new_dav_endpoint':True
     },
     {
         'test_sharePermissions':OCS_PERMISSION_ALL,
-        'test_numFilesToCreate':500,
-        'test_filesizeKB':2000
+        'test_numFilesToCreate':5,
+        'test_filesizeKB':20000,
+        'use_new_dav_endpoint':False
     },
     {
         'test_sharePermissions':OCS_PERMISSION_READ | OCS_PERMISSION_CREATE | OCS_PERMISSION_UPDATE,
-        'test_numFilesToCreate':50,
-        'test_filesizeKB':20000
+        'test_numFilesToCreate':5,
+        'test_filesizeKB':20000,
+        'use_new_dav_endpoint':True
+    },
+    {
+        'test_sharePermissions':OCS_PERMISSION_READ | OCS_PERMISSION_CREATE | OCS_PERMISSION_UPDATE,
+        'test_numFilesToCreate':5,
+        'test_filesizeKB':20000,
+        'use_new_dav_endpoint':False
     },
 ]
 
+def finish_if_not_capable():
+    # Finish the test if some of the prerequisites for this test are not satisfied
+    if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True:
+        # Don't test for <= 9.1 with new endpoint, since it is not supported
+        logger.warn("Skipping test since webdav endpoint is not capable for this server version")
+        return True
+    return False
+
 @add_worker
 def setup(step):
+    if finish_if_not_capable():
+        return
 
     step (1, 'create test users')
     reset_owncloud_account(num_test_users=config.oc_number_test_users)
@@ -77,6 +100,8 @@ def setup(step):
 
 @add_worker
 def sharer(step):
+    if finish_if_not_capable():
+        return
 
     step (2,'Create workdir')
     d = make_workdir()
@@ -88,7 +113,7 @@ def sharer(step):
     localDir = make_workdir(dirName)
 
     list_files(d)
-    run_ocsync(d,user_num=1)
+    run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint)
     list_files(d)
 
     step (4,'Sharer shares directory')
@@ -105,7 +130,7 @@ def sharer(step):
 
     step (7, 'Sharer validates newly added files')
 
-    run_ocsync(d,user_num=1)
+    run_ocsync(d,user_num=1, use_new_dav_endpoint=use_new_dav_endpoint)
     list_files(d+'/localShareDir')
 
     checkFilesExist(d)
@@ -114,13 +139,15 @@ def sharer(step):
 
 @add_worker
 def shareeOne(step):
+    if finish_if_not_capable():
+        return
 
     step (2, 'Sharee One creates workdir')
     d = make_workdir()
 
     step (5,'Sharee One syncs and validates directory exist')
 
-    run_ocsync(d,user_num=2)
+    run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint)
     list_files(d)
 
     sharedDir = os.path.join(d,'localShareDir')
@@ -137,7 +164,7 @@ def shareeOne(step):
         filename = "%s%i%s" % ('localShareDir/TEST_FILE_NEW_USER_SHARE_',i,'.dat')
         createfile(os.path.join(d,filename),'0',count=1000,bs=filesizeKB)
 
-    run_ocsync(d,user_num=2)
+    run_ocsync(d,user_num=2, use_new_dav_endpoint=use_new_dav_endpoint)
     list_files(d+'/localShareDir')
 
     checkFilesExist(d)
@@ -146,6 +173,8 @@ def shareeOne(step):
 
 @add_worker
 def shareeTwo(step):
+    if finish_if_not_capable():
+        return
 
     step (2, 'Sharee Two creates workdir')
     d = make_workdir()
@@ -156,7 +185,7 @@ def shareeTwo(step):
 
     step (5, 'Sharee two syncs and validates directory exists')
 
-    run_ocsync(d,user_num=3)
+    run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint)
     list_files(d)
 
     sharedDir = os.path.join(d,'localShareDir')
@@ -165,7 +194,7 @@ def shareeTwo(step):
 
     step (7, 'Sharee two validates new files exist')
 
-    run_ocsync(d,user_num=3)
+    run_ocsync(d,user_num=3, use_new_dav_endpoint=use_new_dav_endpoint)
     list_files(d+'/localShareDir')
 
     checkFilesExist(d)
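The capability guard added above recurs in several files of this patch. A standalone sketch of the underlying idea, with an illustrative version-parsing helper (smashbox's real `compare_oc_version` handles this internally; the parsing shown here is an assumption for demonstration):

```
# Illustrative sketch of the endpoint-capability guard used by these tests.
# Servers older than 10.0 lack the new dav/files endpoint, so every worker
# bails out early instead of failing mid-sync.

def finish_if_not_capable(server_version, use_new_dav_endpoint):
    """Return True when the testset cannot run against this server."""
    major, minor = (int(x) for x in server_version.split('.')[:2])
    if (major, minor) < (10, 0) and use_new_dav_endpoint:
        return True  # new endpoint unsupported before 10.0
    return False

assert finish_if_not_capable('9.1', True) is True
assert finish_if_not_capable('9.1', False) is False
assert finish_if_not_capable('10.0', True) is False
```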
diff --git a/lib/owncloud/test_backupRestored.py b/lib/owncloud/test_backupRestored.py
new file mode 100644
index 0000000..28a57dc
--- /dev/null
+++ b/lib/owncloud/test_backupRestored.py
@@ -0,0 +1,82 @@
+
+__doc__ = """
+
+This test verifies that, if the data-fingerprint changes because of a backup restoration,
+we do not lose the newer files that were on the server
+[]
+
+"""
+
+from smashbox.utilities import *
+import subprocess
+import glob
+
+
+@add_worker
+def workerA(step):
+    if compare_client_version('2.3.0', '<'):
+        logger.warning('Skipping test, because the client version is known to behave incorrectly')
+        return
+
+    # cleanup remote and local test environment - this should be run once by one worker only
+    reset_owncloud_account()
+    reset_rundir()
+
+    step(0,'create initial content and sync')
+
+    syncdir = make_workdir()
+    folder1 = make_workdir(os.path.join(syncdir, 'folder1'))
+    createfile(os.path.join(folder1, 'file.txt'), '0', count=1000, bs=50)
+    createfile(os.path.join(syncdir, 'file1.txt'), '0', count=1000, bs=50)
+    createfile(os.path.join(syncdir, 'file2.txt'), '0', count=1000, bs=50)
+    createfile(os.path.join(syncdir, 'file3.txt'), '0', count=1000, bs=50)
+
+    run_ocsync(syncdir)
+
+    step(1,'simulate a backup restored by faking an old state')
+    # it is as if file1.txt was newer and thus not present in the backup
+    remove_file(os.path.join(syncdir, 'file1.txt'))
+
+    # folder1 was not present on the backup
+    remove_tree(os.path.join(syncdir, 'folder1'))
+
+    # file2.txt is replaced by an "older" file
+    createfile(os.path.join(syncdir, 'file2.txt'), '1', count=1000, bs=40)
+
+    step(2, 'upload the fake old state')
+    run_ocsync(syncdir)
+
+
+@add_worker
+def workerB(step):
+
+    if compare_client_version('2.3.0', '<'):
+        logger.warning('Skipping test, because the client version is known to behave incorrectly')
+        return
+
+    step(1,'sync the initial content')
+
+    syncdir = make_workdir()
+    run_ocsync(syncdir)
+
+    step(3,'simulate a backup by altering the data-fingerprint')
+
+    # Since I can't change the data fingerprint on the server, I change it in the client's database
+    subprocess.check_output(["sqlite3" , os.path.join(syncdir, ".csync_journal.db"),
+        "DELETE FROM datafingerprint; INSERT INTO datafingerprint (fingerprint) VALUES('1234');"])
+
+    run_ocsync(syncdir)
+
+    error_check(os.path.isdir(os.path.join(syncdir, 'folder1')),
+        "folder1 should have been restored")
+
+    error_check(os.path.exists(os.path.join(syncdir, 'folder1/file.txt')),
+        "folder1/file.txt should have been restored")
+
+    conflict_files = get_conflict_files(syncdir)
+    error_check(len(conflict_files) == 1,
+        "file2 should have been backed up as a conflict")
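Aside: on a production server the fingerprint is normally refreshed after a restore with `occ maintenance:data-fingerprint` (an assumption about the server tooling, not exercised here); since the test cannot reach the server database, it rewrites the client journal instead. The same trick as a self-contained helper using Python's `sqlite3` module rather than shelling out to the `sqlite3` binary:

```
import os
import sqlite3

def fake_restored_backup(syncdir, fingerprint='1234'):
    """Illustrative helper (not part of the patch): overwrite the client
    journal's data-fingerprint so the next sync behaves as if the server
    had been restored from backup."""
    journal = os.path.join(syncdir, '.csync_journal.db')
    con = sqlite3.connect(journal)
    with con:  # commits on success
        con.execute('DELETE FROM datafingerprint')
        con.execute('INSERT INTO datafingerprint (fingerprint) VALUES (?)',
                    (fingerprint,))
    con.close()
```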
diff --git a/lib/owncloud/test_chunking.py b/lib/owncloud/test_chunking.py
new file mode 100755
index 0000000..7da1b5f
--- /dev/null
+++ b/lib/owncloud/test_chunking.py
@@ -0,0 +1,109 @@
+import os
+import time
+import tempfile
+
+
+__doc__ = """
+Upload a small file "small.dat" (10 kB)
+Upload a big file "big.dat" (50 MB)
+Overwrite big with small file, keeping the target name
+Overwrite small with big file, keeping the target name
+"""
+
+from smashbox.utilities import *
+from smashbox.utilities.hash_files import *
+
+small_file_size = 10 # KB
+big_file_size = 50000 # KB
+zero_file_size = 0 # KB
+
+# True => use new webdav endpoint (dav/files)
+# False => use old webdav endpoint (webdav)
+use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True))
+
+testsets = [
+    {
+        'use_new_dav_endpoint':True
+    },
+    {
+        'use_new_dav_endpoint':False
+    },
+]
+
+def expect_content(fn,md5):
+    actual_md5 = md5sum(fn)
+    error_check(actual_md5 == md5, "inconsistent md5 of %s: expected %s, got %s"%(fn,md5,actual_md5))
+
+def finish_if_not_capable():
+    # Finish the test if some of the prerequisites for this test are not satisfied
+    if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True:
+        # Don't test for <= 9.1 with new endpoint, since it is not supported
+        logger.warn("Skipping test since webdav endpoint is not capable for this server version")
+        return True
+    return False
+
+@add_worker
+def worker0(step):
+    if finish_if_not_capable():
+        return
+
+    # do not cleanup server files from previous run
+    reset_owncloud_account()
+
+    # cleanup all local files for the test
+    reset_rundir()
+
+    step(1,'Preparation')
+    shared = reflection.getSharedObject()
+    d = make_workdir()
+    run_ocsync(d)
+
+    step(2,'Create and sync test files')
+
+    createfile(os.path.join(d,'TEST_SMALL_TO_BIG.dat'),'0',count=1000,bs=small_file_size)
+    createfile(os.path.join(d,'TEST_BIG_TO_SMALL.dat'),'0',count=1000,bs=big_file_size)
+    #createfile(os.path.join(d,'TEST_ZERO_TO_BIG.dat'),'0',count=1000,bs=filesizeKB)
+    #createfile(os.path.join(d,'TEST_FILE_MODIFIED_BOTH.dat'),'0',count=1000,bs=filesizeKB)
+
+    shared['TEST_SMALL_TO_BIG'] = md5sum(os.path.join(d,'TEST_SMALL_TO_BIG.dat'))
+    shared['TEST_BIG_TO_SMALL'] = md5sum(os.path.join(d,'TEST_BIG_TO_SMALL.dat'))
+    logger.info('TEST_SMALL_TO_BIG: %s',shared['TEST_SMALL_TO_BIG'])
+    logger.info('TEST_BIG_TO_SMALL: %s',shared['TEST_BIG_TO_SMALL'])
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    step(5,'Sync down and check if correct')
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+    expect_content(os.path.join(d,'TEST_SMALL_TO_BIG.dat'), shared['TEST_SMALL_TO_BIG'])
+    expect_content(os.path.join(d,'TEST_BIG_TO_SMALL.dat'), shared['TEST_BIG_TO_SMALL'])
+
+
+@add_worker
+def worker1(step):
+    if finish_if_not_capable():
+        return
+
+    step(3,'Preparation')
+    shared = reflection.getSharedObject()
+    d = make_workdir()
+    run_ocsync(d)
+
+    expect_content(os.path.join(d,'TEST_SMALL_TO_BIG.dat'), shared['TEST_SMALL_TO_BIG'])
+    expect_content(os.path.join(d,'TEST_BIG_TO_SMALL.dat'), shared['TEST_BIG_TO_SMALL'])
+
+    step(4,'Overwrite files')
+
+    createfile(os.path.join(d,'TEST_SMALL_TO_BIG.dat'),'1',count=1000,bs=big_file_size)
+    createfile(os.path.join(d,'TEST_BIG_TO_SMALL.dat'),'1',count=1000,bs=small_file_size)
+    shared['TEST_SMALL_TO_BIG'] = md5sum(os.path.join(d,'TEST_SMALL_TO_BIG.dat'))
+    shared['TEST_BIG_TO_SMALL'] = md5sum(os.path.join(d,'TEST_BIG_TO_SMALL.dat'))
+    logger.info('TEST_SMALL_TO_BIG: %s',shared['TEST_SMALL_TO_BIG'])
+    logger.info('TEST_BIG_TO_SMALL: %s',shared['TEST_BIG_TO_SMALL'])
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    step(5,'Check if correct')
+    expect_content(os.path.join(d,'TEST_SMALL_TO_BIG.dat'), shared['TEST_SMALL_TO_BIG'])
+    expect_content(os.path.join(d,'TEST_BIG_TO_SMALL.dat'), shared['TEST_BIG_TO_SMALL'])
+
diff --git a/lib/owncloud/test_dirBecomesFile.py b/lib/owncloud/test_dirBecomesFile.py
new file mode 100644
index 0000000..6032b45
--- /dev/null
+++ b/lib/owncloud/test_dirBecomesFile.py
@@ -0,0 +1,143 @@
+from owncloud import HTTPResponseError
+
+__doc__ = """
+
+Test syncing when a directory turns into a file or back.
+ +""" + +from smashbox.utilities import * +from shutil import rmtree + +def make_subdir(d, sub): + return make_workdir(os.path.join(d, sub)) + +def expect_webdav_isfile(path, user_num=None): + exitcode,stdout,stderr = runcmd('curl -s -k %s -XPROPFIND %s | xmllint --format -'%(config.get('curl_opts',''),oc_webdav_url(remote_folder=path, user_num=user_num))) + error_check("NotFound" not in stdout, "Remote path %s does not exist" % path) + error_check("d:collection" not in stdout, "Remote path %s is not a file" % path) + +@add_worker +def dir_to_file(step): + + if compare_client_version('2.1.0', '<='): + logger.warning('Skipping test, because the client version is known to behave incorrectly') + return + + step(1, 'Create a folder and sync it') + + d = make_workdir() + folder1 = make_subdir(d, 'folder1') + folder2 = make_subdir(d, 'folder2') + + def make_folder(name): + folder = make_subdir(folder1, name) + sub_del = make_subdir(folder, 'sub_del') + sub_move = make_subdir(folder, 'sub_move') + createfile(os.path.join(folder, 'file-delete.txt'), '0', count=1000, bs=50) + createfile(os.path.join(folder, 'file-move.txt'), '1', count=1000, bs=50) + createfile(os.path.join(sub_del, 'file-sub-del.txt'), '2', count=1000, bs=50) + createfile(os.path.join(sub_move, 'file-sub-move.txt'), '3', count=1000, bs=50) + + make_folder('dirtofile') + make_folder('dirtofile2') + + # this will later replace dirtofile2 + createfile(os.path.join(folder1, 'dirtofile2-move'), '4', count=1000, bs=50) + # and later this will become dirtofile2 + dirtofile2move = make_subdir(folder1, 'dirtofile2-move2') + createfile(os.path.join(folder1, 'dirtofile2-move2', 'foo.txt'), '5', count=1000, bs=50) + + run_ocsync(folder1) + # sanity check only + expect_webdav_exist('dirtofile/file-delete.txt') + expect_webdav_exist('dirtofile2/file-delete.txt') + expect_webdav_exist('dirtofile2-move') + + # at this point, both client and server have 'dirtofile' folders + + + step(2, 'Turn the folder into a file locally and propagate to the server') + # This tests folder->file propagating to the server + + # we do this by syncing to a different folder, adjusting, syncing up again + run_ocsync(folder2) + mv(os.path.join(folder2, 'dirtofile', 'file-move.txt'), os.path.join(folder2, 'file-move.txt')) + mv(os.path.join(folder2, 'dirtofile', 'sub_move'), os.path.join(folder2, 'sub_move')) + rmtree(os.path.join(folder2, 'dirtofile')) + createfile(os.path.join(folder2, 'dirtofile'), 'N', count=1000, bs=50) + + mv(os.path.join(folder2, 'dirtofile2', 'file-move.txt'), os.path.join(folder2, 'file-move2.txt')) + mv(os.path.join(folder2, 'dirtofile2', 'sub_move'), os.path.join(folder2, 'sub_move2')) + rmtree(os.path.join(folder2, 'dirtofile2')) + mv(os.path.join(folder2, 'dirtofile2-move'), os.path.join(folder2, 'dirtofile2')) + + run_ocsync(folder2) + + error_check(os.path.isfile(os.path.join(folder2, 'dirtofile')), "expected 'dirtofile' to be a file") + expect_webdav_isfile('dirtofile') + expect_webdav_exist('file-move.txt') + expect_webdav_exist('sub_move') + error_check(os.path.isfile(os.path.join(folder2, 'dirtofile2')), "expected 'dirtofile2' to be a file") + expect_webdav_isfile('dirtofile2') + expect_webdav_exist('file-move2.txt') + expect_webdav_exist('sub_move2') + expect_webdav_does_not_exist('dirtofile2-move') + + + step(3, 'Sync the folder that became a file into the old working tree') + # This tests folder->file propagating from the server + + run_ocsync(folder1) + + # server is unchanged + expect_webdav_isfile('dirtofile') + 
expect_webdav_isfile('dirtofile2') + + # client has the files too + expect_exists(os.path.join(folder1, 'dirtofile')) + expect_exists(os.path.join(folder1, 'file-move.txt')) + expect_exists(os.path.join(folder1, 'sub_move/file-sub-move.txt')) + error_check(os.path.isfile(os.path.join(folder1, 'dirtofile')), "'dirtofile' didn't become a file") + expect_exists(os.path.join(folder1, 'dirtofile2')) + expect_exists(os.path.join(folder1, 'file-move2.txt')) + expect_exists(os.path.join(folder1, 'sub_move2/file-sub-move.txt')) + error_check(os.path.isfile(os.path.join(folder1, 'dirtofile2')), "'dirtofile2' didn't become a file") + expect_does_not_exist(os.path.join(folder1, 'dirtofile2-move')) + + # at this point, both client and server have a 'dirtofile' file + + + step(4, 'Turn the file into a folder locally and propagate to the server') + # This tests file->folder propagating to the server + + # we do this by syncing to a different folder, adjusting, syncing up again + run_ocsync(folder2) + + delete_file(os.path.join(folder2, 'dirtofile')) + mkdir(os.path.join(folder2, 'dirtofile')) + createfile(os.path.join(folder2, 'dirtofile', 'newfile.txt'), 'M', count=1000, bs=50) + + delete_file(os.path.join(folder2, 'dirtofile2')) + mv(os.path.join(folder2, 'dirtofile2-move2'), os.path.join(folder2, 'dirtofile2')) + + run_ocsync(folder2) + + error_check(os.path.isdir(os.path.join(folder2, 'dirtofile')), "expected 'dirtofile' to be a folder") + expect_webdav_exist('dirtofile/newfile.txt') + error_check(os.path.isdir(os.path.join(folder2, 'dirtofile2')), "expected 'dirtofile2' to be a folder") + expect_webdav_exist('dirtofile2/foo.txt') + + + step(5, 'Sync the file that became a folder into the old working tree') + # This tests file->folder propagating from the server + + run_ocsync(folder1) + + # server is unchanged + expect_webdav_exist('dirtofile/newfile.txt') + expect_webdav_exist('dirtofile2/foo.txt') + + # client has the file too, implying that dirtofile is a folder + expect_exists(os.path.join(folder1, 'dirtofile/newfile.txt')) + expect_exists(os.path.join(folder1, 'dirtofile2/foo.txt')) diff --git a/lib/owncloud/test_dirDepth.py b/lib/owncloud/test_dirDepth.py new file mode 100644 index 0000000..a346cde --- /dev/null +++ b/lib/owncloud/test_dirDepth.py @@ -0,0 +1,137 @@ + +__doc__ = """ + +Test uploading a large number of files to a directory and then syncing + ++--------+----------------------------------------------+-------------------------------------+ +| Step | Uploader | Downloader | +| Number | | | ++========+==============================================+=====================================+ +| 2 | Create work dir | Create work dir | ++--------+----------------------------------------------+-------------------------------------+ +| 3 | Create directories and files and upload them | | ++--------+----------------------------------------------+-------------------------------------+ +| 4 | Validate files have been uploaded | | ++--------+----------------------------------------------+-------------------------------------+ +| 5 | | Sync | ++--------+----------------------------------------------+-------------------------------------+ +| 6 | | Validate files have been downloaded | ++--------+----------------------------------------------+-------------------------------------+ + +Data Providers: + test_numFilesToCreate: Number of files to create + test_filesizeKB: Size of file to create in KB + dir_depth: How deep the directory structure should go + dir_depth_style: Defines if the directory 
+                     layout is flat or hierarchical
+
+"""
+
+from smashbox.utilities import *
+import re
+
+filesizeKB = int(config.get('test_filesizeKB', 10))
+numFilesToCreate = config.get('test_numFilesToCreate', 10)
+dir_depth = config.get('dir_depth', 5)
+style = config.get('dir_depth_style', 'nested')
+
+testsets = [
+    {
+        'dir_depth': 5,
+        'test_numFilesToCreate': 50,
+        'test_filesizeKB': 20,
+        'dir_depth_style': 'nested',
+    },
+    {
+        'dir_depth': 5,
+        'test_numFilesToCreate': 50,
+        'test_filesizeKB': 200,
+        'dir_depth_style': 'nested',
+    },
+    {
+        'dir_depth': 10,
+        'test_numFilesToCreate': 5,
+        'test_filesizeKB': 2000,
+        'dir_depth_style': 'flat'
+    },
+    {
+        'dir_depth': 10,
+        'test_numFilesToCreate': 5,
+        'test_filesizeKB': 2000,
+        'dir_depth_style': 'nested'
+    },
+]
+
+
+def uploader(step):
+
+    step(2, 'Create workdir')
+    d = make_workdir()
+    user_num = get_user_number_from_work_directory(d)
+
+    step(3, 'Create directories and files then sync')
+    files = []
+
+    if style == 'flat':
+        for i in range(dir_depth):
+            dir_name = os.path.join(d, "%s_%d" % ('upload_dir', i))
+            upload_dir = make_workdir(dir_name)
+            for j in range(0, numFilesToCreate):
+                upload_name = "%s_%d.dat" % ('TEST_FILE_NEW_USER_SHARE', j)
+                createfile(os.path.join(upload_dir, upload_name), '0', count=1000, bs=filesizeKB)
+                files.append(os.path.join(upload_dir, upload_name)[len(d) + 1:])
+    else:
+        dir_name = d
+        for i in range(dir_depth):
+            dir_name = os.path.join(dir_name, "%s_%d" % ('upload_dir', i))
+            upload_dir = make_workdir(dir_name)
+            for j in range(0, numFilesToCreate):
+                upload_name = "%s_%d.dat" % ('TEST_FILE_NEW_USER_SHARE', j)
+                createfile(os.path.join(upload_dir, upload_name), '0', count=1000, bs=filesizeKB)
+                files.append(os.path.join(upload_dir, upload_name)[len(d) + 1:])
+
+    run_ocsync(d, user_num=user_num)
+    shared = reflection.getSharedObject()
+    shared['FILES_%i' % user_num] = files
+
+    step(4, 'Uploader verify files are uploaded')
+
+    for f in files:
+        expect_exists(os.path.join(d, f))
+        expect_webdav_exist(f, user_num=user_num)
+
+    step(5, 'Uploader final step')
+
+
+def downloader(step):
+
+    step(2, 'Create workdir')
+    d = make_workdir()
+    user_num = get_user_number_from_work_directory(d)
+
+    step(5, 'Sync and validate')
+    run_ocsync(d, user_num=user_num)
+
+    step(6, 'Downloader validate that all files exist')
+    shared = reflection.getSharedObject()
+    files = shared['FILES_%i' % user_num]
+
+    error_check(len(files) == dir_depth * numFilesToCreate, 'Number of files does not match')
+
+    for f in files:
+        expect_exists(os.path.join(d, f))
+
+for u in range(config.oc_number_test_users):
+    add_worker(uploader, name="uploader%02d" % (u+1))
+    add_worker(downloader, name="downloader%02d" % (u+1))
+
+
+def get_user_number_from_work_directory(work_dir):
+    """
+    :param work_dir: string Path of the directory
+                     /home/user/smashdir/test_uploadFiles-150522-111229/shareeTwo01
+    :return: integer User number from the last directory name
+    """
+
+    work_dir = work_dir[len(config.rundir) + 1:]
+    user_num = int(re.search(r'\d+', work_dir).group())
+    return user_num
diff --git a/lib/owncloud/test_locking.py b/lib/owncloud/test_locking.py
new file mode 100644
index 0000000..f33cb96
--- /dev/null
+++ b/lib/owncloud/test_locking.py
@@ -0,0 +1,245 @@
+import re
+
+from smashbox.owncloudorg.locking import *
+from smashbox.utilities import *
+import os
+import signal
+
+__doc__ = """
+
+Test locking enforcement
++------+------------------------------------+
+| Step | User                               |
++------+------------------------------------+
+| 2    | Enable QA testing
app | +| 3 | Create dir/subdir/ | +| 4 | Populate locks | +| 5 | Try to upload dir/subdir/file2.dat | +| 6 | Remove locks | +| 7 | Upload dir/subdir/file2.dat | ++------+------------------------------------+ + +""" + + +DIR_NAME = 'dir' +SUBDIR_NAME = os.path.join(DIR_NAME, 'subdir') + +testsets = [ + { + 'locks': [ + { + 'lock': LockProvider.LOCK_EXCLUSIVE, + 'path': DIR_NAME + } + ], + 'can_upload': False + }, + { + 'locks': [ + { + 'lock': LockProvider.LOCK_SHARED, + 'path': DIR_NAME + } + ], + 'can_upload': True + }, + { + 'locks': [ + { + 'lock': LockProvider.LOCK_EXCLUSIVE, + 'path': SUBDIR_NAME + } + ], + 'can_upload': False + }, + { + 'locks': [ + { + 'lock': LockProvider.LOCK_SHARED, + 'path': SUBDIR_NAME + } + ], + 'can_upload': True + }, + { + 'locks': [ + { + 'lock': LockProvider.LOCK_EXCLUSIVE, + 'path': DIR_NAME + }, + { + 'lock': LockProvider.LOCK_SHARED, + 'path': SUBDIR_NAME + } + ], + 'can_upload': False + }, + { + 'locks': [ + { + 'lock': LockProvider.LOCK_SHARED, + 'path': DIR_NAME + }, + { + 'lock': LockProvider.LOCK_EXCLUSIVE, + 'path': SUBDIR_NAME + } + ], + 'can_upload': False + }, + { + 'locks': [ + { + 'lock': LockProvider.LOCK_SHARED, + 'path': DIR_NAME + }, + { + 'lock': LockProvider.LOCK_SHARED, + 'path': SUBDIR_NAME + } + ], + 'can_upload': True + } +] + +use_locks = config.get('locks', testsets[0]['locks']) +can_upload = config.get('can_upload', testsets[0]['can_upload']) +original_cmd = config.oc_sync_cmd + + +@add_worker +def owner_worker(step): + + if compare_client_version('2.1.1', '<='): + # The client has a bug with permissions of folders on the first sync before 2.1.2 + logger.warning('Skipping test, because the client version is known to behave incorrectly') + return + + if compare_oc_version('9.0', '<='): + # The server has no fake locking support + logger.warning('Skipping test, because the server has no fake locking support') + return + + oc_api = get_oc_api() + oc_api.login(config.oc_admin_user, config.oc_admin_password) + lock_provider = LockProvider(oc_api) + lock_provider.enable_testing_app() + + if not lock_provider.isUsingDBLocking(): + logger.warning('Skipping test, because DB Locking is not enabled or lock provisioning is not supported') + return + + step(2, 'Create workdir') + d = make_workdir() + + from owncloud import OCSResponseError + try: + lock_provider.unlock() + except OCSResponseError: + fatal_check(False, 'Testing App seems to not be enabled') + + step(3, 'Create test folder') + + mkdir(os.path.join(d, DIR_NAME)) + mkdir(os.path.join(d, SUBDIR_NAME)) + createfile(os.path.join(d, DIR_NAME, 'file.dat'), '0', count=1000, bs=1) + createfile(os.path.join(d, SUBDIR_NAME, 'sub_file.dat'), '0', count=1000, bs=1) + + run_ocsync(d) + + step(4, 'Lock items') + + for lock in use_locks: + fatal_check( + lock_provider.is_locked(lock['lock'], config.oc_account_name, lock['path']) == False, + 'Resource is already locked' + ) + + lock_provider.lock(lock['lock'], config.oc_account_name, lock['path']) + + fatal_check( + lock_provider.is_locked(lock['lock'], config.oc_account_name, lock['path']), + 'Resource should be locked' + ) + + step(5, 'Try to upload a file in locked item') + + createfile(os.path.join(d, SUBDIR_NAME, 'file2.dat'), '0', count=1000, bs=1) + + try: + save_run_ocsync(d, seconds=10, max_sync_retries=1) + except TimeoutError as err: + if compare_client_version('2.1.0', '>='): + # Max retries should terminate in time + error_check(False, err.message) + else: + # Client does not terminate before 2.1: 
https://github.com/owncloud/client/issues/4037
+            logger.warning(err.message)
+
+    if can_upload:
+        expect_webdav_exist(os.path.join(SUBDIR_NAME, 'file2.dat'))
+    else:
+        expect_webdav_does_not_exist(os.path.join(SUBDIR_NAME, 'file2.dat'))
+
+    step(6, 'Unlock item and sync again')
+
+    for lock in use_locks:
+        fatal_check(
+            lock_provider.is_locked(lock['lock'], config.oc_account_name, lock['path']),
+            'Resource should be locked'
+        )
+
+        lock_provider.unlock(lock['lock'], config.oc_account_name, lock['path'])
+
+        fatal_check(
+            lock_provider.is_locked(lock['lock'], config.oc_account_name, lock['path']) == False,
+            'Resource should be unlocked'
+        )
+
+    step(7, 'Upload a file in unlocked item')
+
+    run_ocsync(d)
+
+    expect_webdav_exist(os.path.join(SUBDIR_NAME, 'file2.dat'))
+
+    step(8, 'Final - Unlock everything')
+
+    lock_provider.unlock()
+    lock_provider.disable_testing_app()
+
+
+class TimeoutError(Exception):
+    pass
+
+
+def handler(signum, frame):
+    config.oc_sync_cmd = original_cmd
+    raise TimeoutError('Sync client did not terminate in time')
+
+
+def save_run_ocsync(local_folder, seconds=10, max_sync_retries=1, remote_folder="", n=None, user_num=None):
+    """
+    A safe variation of run_ocsync that terminates after n seconds or x retries depending on the client version
+
+    :param local_folder: The local folder to sync
+    :param seconds: Number of seconds until the request should be terminated
+    :param max_sync_retries: Number of retries for each sync
+    :param remote_folder: The remote target folder to sync to
+    :param n: Number of syncs
+    :param user_num: User number
+    """
+
+    if compare_client_version('2.1.0', '>='):
+        pattern = re.compile(r' \-\-max\-sync\-retries \d+')
+        config.oc_sync_cmd = pattern.sub('', config.oc_sync_cmd)
+        config.oc_sync_cmd += ' --max-sync-retries %i' % max_sync_retries
+
+    signal.signal(signal.SIGALRM, handler)
+    signal.alarm(seconds)
+
+    # This run_ocsync() may hang indefinitely
+    run_ocsync(local_folder, remote_folder, n, user_num)
+
+    signal.alarm(0)
+    config.oc_sync_cmd = original_cmd
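The SIGALRM guard used by `save_run_ocsync` is a reusable pattern. A minimal, self-contained sketch as a context manager (illustrative only, not part of the patch; Unix-only, like the framework itself):

```
import signal
from contextlib import contextmanager

class SyncTimeout(Exception):
    pass

@contextmanager
def time_limit(seconds):
    """Raise SyncTimeout if the wrapped block runs longer than `seconds`."""
    def _handler(signum, frame):
        raise SyncTimeout('operation did not terminate in time')
    old_handler = signal.signal(signal.SIGALRM, _handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)                          # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)  # restore previous handler

# usage:
# with time_limit(10):
#     run_ocsync(local_folder)
```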
diff --git a/lib/owncloud/test_moveFileStatusCode.py b/lib/owncloud/test_moveFileStatusCode.py
new file mode 100644
index 0000000..fb1696e
--- /dev/null
+++ b/lib/owncloud/test_moveFileStatusCode.py
@@ -0,0 +1,56 @@
+from owncloud import HTTPResponseError
+
+__doc__ = """
+
+Test moving a file via webdav
+
+"""
+
+from smashbox.utilities import *
+
+@add_worker
+def move_non_existing_file(step):
+
+    step(1, 'Create a folder and a file')
+    d = make_workdir()
+    dir_name = os.path.join(d, 'folder')
+    local_dir = make_workdir(dir_name)
+
+    createfile(os.path.join(d, 'file1.txt'), '0', count=1000, bs=50)
+    createfile(os.path.join(local_dir, 'file3.txt'), '1', count=1000, bs=50)
+    run_ocsync(d, user_num=1)
+
+    expect_webdav_exist('file1.txt', user_num=1)
+    expect_webdav_does_not_exist(os.path.join('folder', 'file2.txt'), user_num=1)
+    expect_webdav_exist(os.path.join('folder', 'file3.txt'), user_num=1)
+
+    step(2, 'Move the file into the folder')
+
+    oc = get_oc_api()
+    oc.login("%s%i" % (config.oc_account_name, 1), config.oc_account_password)
+
+    try:
+        oc.move('file1.txt', os.path.join('folder', 'file2.txt'))
+    except HTTPResponseError as err:
+        error_check(
+            False,
+            'Server replied with status code: %i' % err.status_code
+        )
+
+    expect_webdav_does_not_exist('file1.txt', user_num=1)
+    expect_webdav_exist(os.path.join('folder', 'file2.txt'), user_num=1)
+    expect_webdav_exist(os.path.join('folder', 'file3.txt'), user_num=1)
+
+    step(3, 'Move non existing file into the folder')
+
+    try:
+        oc.move('file1.txt', os.path.join('folder', 'file2.txt'))
+    except HTTPResponseError as err:
+        error_check(
+            err.status_code == 404,
+            'Server replied with status code: %i' % err.status_code
+        )
+
+    expect_webdav_does_not_exist('file1.txt', user_num=1)
+    expect_webdav_exist(os.path.join('folder', 'file2.txt'), user_num=1)
+    expect_webdav_exist(os.path.join('folder', 'file3.txt'), user_num=1)
diff --git a/lib/owncloud/test_moveFilesTwice.py b/lib/owncloud/test_moveFilesTwice.py
new file mode 100644
index 0000000..babfa2f
--- /dev/null
+++ b/lib/owncloud/test_moveFilesTwice.py
@@ -0,0 +1,106 @@
+
+__doc__ = """
+
+This test verifies that files moved on the server multiple times do not get
+re-uploaded, even if the files are moved while the sync is running.
+[https://github.com/owncloud/client/issues/4370]
+
+"""
+
+from smashbox.utilities import *
+import subprocess
+
+nfiles = 20
+TEST_FILES = ['test%02d.dat'%i for i in range(nfiles)]
+
+def getFileId(syncdir, fileName):
+    return subprocess.check_output(["sqlite3" , syncdir + "/.csync_journal.db",
+        "select fileid from metadata where path = \"" + fileName + "\""])
+
+@add_worker
+def workerA(step):
+    if compare_client_version('2.1.0', '<='):
+        logger.warning('Skipping test, because the client version is known to behave incorrectly')
+        return
+
+    # cleanup remote and local test environment - this should be run once by one worker only
+    reset_owncloud_account()
+    reset_rundir()
+
+    syncdir = make_workdir("workdir")
+    d1 = os.path.join(syncdir,"dir1")
+    d2 = os.path.join(syncdir,"dir2")
+    d_final = os.path.join(syncdir,"dirFinal")
+
+    step(0,'create initial content and sync')
+
+    # create a folder and some files in it
+    mkdir(d1)
+
+    for f in TEST_FILES:
+        fn = os.path.join(d1,f)
+        createfile(fn,'0',count=1000,bs=1000)
+
+    run_ocsync(syncdir)
+
+    fileIds = list(map((lambda f:getFileId(syncdir, 'dir1/' + f)), TEST_FILES))
+
+    step(1,'move the folder')
+
+    mkdir(d2)
+    mv(d1+"/*",d2)
+
+    step(2, 'sync')
+    run_ocsync(syncdir)
+
+    step(3,'final sync')
+
+    run_ocsync(syncdir)
+
+    final_fileIds = list(map((lambda f:getFileId(syncdir, 'dirFinal/' + f)), TEST_FILES))
+
+    # The file ids need to stay the same for every file, since they only got moved and not re-uploaded
+    error_check(fileIds == final_fileIds, "File id differ (%s != %s)" % (fileIds, final_fileIds))
+
+
+@add_worker
+def workerB(step):
+
+    if compare_client_version('2.1.0', '<='):
+        logger.warning('Skipping test, because the client version is known to behave incorrectly')
+        return
+
+    step(2,'move the folder during the sync')
+
+    syncdir = make_workdir("workdir")
+    d1 = os.path.join(syncdir,"dir1")
+    d2 = os.path.join(syncdir,"dir2")
+    d3 = os.path.join(syncdir,"dir3")
+    d4 = os.path.join(syncdir,"dir4")
+    d5 = os.path.join(syncdir,"dir5")
+    d6 = os.path.join(syncdir,"dir6")
+    d_final = os.path.join(syncdir,"dirFinal")
+
+    # Do it several times with a one-second interval to be sure we do it at least once
+    # during the propagation phase
+    sleep(1)
+    mkdir(d3)
+    mv(d2+"/*",d3)
+
+    sleep(1)
+    mkdir(d4)
+    mv(d3+"/*",d4)
+
+    sleep(1)
+    mkdir(d5)
+    mv(d4+"/*",d5)
+
+    sleep(1)
+    mkdir(d6)
+    mv(d5+"/*",d6)
+
+    sleep(1)
+    mkdir(d_final)
+    mv(d6+"/*",d_final)
+
diff --git a/lib/owncloud/test_remoteShareFile.py b/lib/owncloud/test_remoteShareFile.py
new file mode 100644
index 0000000..c0ae8ca
--- /dev/null
+++ b/lib/owncloud/test_remoteShareFile.py
@@ -0,0 +1,284 @@
+
+__doc__ = """
+
+Test basic file remote-sharing between users.
+ ++-----------+----------------------+------------------+----------------------------+ +| Step | Sharer | Sharee One | Sharee Two | +| Number | | | | ++===========+======================+==================+============================| +| 2 | create work dir | create work dir | create work dir | ++-----------+----------------------+------------------+----------------------------+ +| 3 | Create test files | | | ++-----------+----------------------+------------------+----------------------------+ +| 4 | Shares files with | | | +| | Sharee One | | | ++-----------+----------------------+------------------+----------------------------+ +| 5 | | Syncs and | | +| | | validates files | | ++-----------+----------------------+------------------+----------------------------+ +| 6 | | modifies one | | +| | | files, if | | +| | | permitted | | ++-----------+----------------------+------------------+----------------------------+ +| 7 | Validates modified | | | +| | file or not, based | | | +| | on permissions | | | ++-----------+----------------------+------------------+----------------------------+ +| 8 | | Shares a file | | +| | | with sharee two | | +| | | if permitted | | ++-----------+----------------------+------------------+----------------------------+ +| 9 | | | Syncs and validates | +| | | |file is shared if permitted | ++-----------+----------------------+------------------+----------------------------+ +| 10 | Sharer unshares a | | | +| | file | | | ++-----------+----------------------+------------------+----------------------------+ +| 11 | | Syncs and | Syncs and validates | +| | | validates file | file not present | +| | | not present | | ++-----------+----------------------+------------------+----------------------------+ +| 12 | Sharer deletes a | | | +| | file | | | ++-----------+----------------------+------------------+----------------------------+ +| 13 | | Syncs and | | +| | | validates file | | +| | | not present | | ++-----------+----------------------+------------------+----------------------------+ +| 14 | Final step | Final step | Final Step | ++-----------+----------------------+------------------+----------------------------+ + + +Data Providers: + + test_sharePermissions: Permissions to be applied to the share + +""" + +from smashbox.utilities import * +from smashbox.owncloudorg.remote_sharing import * + +OCS_PERMISSION_READ = 1 +OCS_PERMISSION_UPDATE = 2 +OCS_PERMISSION_CREATE = 4 +OCS_PERMISSION_DELETE = 8 +OCS_PERMISSION_SHARE = 16 +OCS_PERMISSION_ALL = 31 + +filesizeKB = int(config.get('share_filesizeKB', 10)) +sharePermissions = int(config.get('test_sharePermissions', OCS_PERMISSION_ALL)) + +testsets = [ + { + 'test_sharePermissions': OCS_PERMISSION_ALL + }, + { + 'test_sharePermissions': OCS_PERMISSION_READ | OCS_PERMISSION_UPDATE + }, + { + 'test_sharePermissions': OCS_PERMISSION_READ | OCS_PERMISSION_SHARE + } +] + + +@add_worker +def sharer(step): + + step(2, 'Create workdir') + d = make_workdir() + + step(3, 'Create initial test files and directories') + + createfile(os.path.join(d, 'TEST_FILE_USER_SHARE.dat'), '0', count=1000, bs=filesizeKB) + createfile(os.path.join(d, 'TEST_FILE_USER_RESHARE.dat'), '0', count=1000, bs=filesizeKB) + createfile(os.path.join(d, 'TEST_FILE_MODIFIED_USER_SHARE.dat'), '0', count=1000, bs=filesizeKB) + + shared = reflection.getSharedObject() + shared['md5_sharer'] = md5sum(os.path.join(d, 'TEST_FILE_MODIFIED_USER_SHARE.dat')) + logger.info('md5_sharer: %s', shared['md5_sharer']) + + list_files(d) + run_ocsync(d, user_num=1) + 
list_files(d) + + step(4, 'Sharer shares files') + + user1 = "%s%i" % (config.oc_account_name, 1) + user2 = "%s%i" % (config.oc_account_name, 2) + + kwargs = {'perms': sharePermissions} + shared['TEST_FILE_USER_SHARE'] = remote_share_file_with_user( + 'TEST_FILE_USER_SHARE.dat', user1, user2, **kwargs + ) + shared['TEST_FILE_USER_RESHARE'] = remote_share_file_with_user( + 'TEST_FILE_USER_RESHARE.dat', user1, user2, **kwargs + ) + shared['TEST_FILE_MODIFIED_USER_SHARE'] = remote_share_file_with_user( + 'TEST_FILE_MODIFIED_USER_SHARE.dat', user1, user2, **kwargs + ) + shared['sharer.TEST_FILE_MODIFIED_USER_SHARE'] = os.path.join(d, 'TEST_FILE_MODIFIED_USER_SHARE.dat') + + step(7, 'Sharer validates modified file') + run_ocsync(d, user_num=1) + + if not sharePermissions & OCS_PERMISSION_UPDATE: + expect_not_modified(os.path.join(d, 'TEST_FILE_MODIFIED_USER_SHARE.dat'), shared['md5_sharer']) + else: + expect_modified(os.path.join(d, 'TEST_FILE_MODIFIED_USER_SHARE.dat'), shared['md5_sharer']) + + step(10, 'Sharer unshares a file') + delete_share(user1, shared['TEST_FILE_USER_RESHARE']) + + step(12, 'Sharer deletes file') + + list_files(d) + remove_file(os.path.join(d, 'TEST_FILE_USER_SHARE.dat')) + run_ocsync(d, user_num=1) + list_files(d) + + step(14, 'Sharer final step') + + +@add_worker +def sharee_one(step): + + step(2, 'Sharee One creates workdir') + d = make_workdir() + + step(5, 'Sharee One syncs and validate files exist') + + run_ocsync(d, user_num=2) + list_files(d) + + # Accept the remote shares for user2 + user2 = "%s%i" % (config.oc_account_name, 2) + openShares = list_open_remote_share(user2) + for share in openShares: + accept_remote_share(user2, int(share['id'])) + sleep(5) + + run_ocsync(d, user_num=2) + list_files(d) + + shared_file = os.path.join(d, 'TEST_FILE_USER_SHARE.dat') + logger.info('Checking that %s is present in local directory for Sharee One', shared_file) + expect_exists(shared_file) + + shared_file = os.path.join(d, 'TEST_FILE_USER_RESHARE.dat') + logger.info('Checking that %s is present in local directory for Sharee One', shared_file) + expect_exists(shared_file) + + shared_file = os.path.join(d, 'TEST_FILE_MODIFIED_USER_SHARE.dat') + logger.info('Checking that %s is present in local directory for Sharee One', shared_file) + expect_exists(shared_file) + + step(6, 'Sharee One modifies TEST_FILE_MODIFIED_USER_SHARE.dat') + + modify_file(os.path.join(d, 'TEST_FILE_MODIFIED_USER_SHARE.dat'), '1', count=10, bs=filesizeKB) + run_ocsync(d, user_num=2) + list_files(d) + + shared = reflection.getSharedObject() + if not sharePermissions & OCS_PERMISSION_UPDATE: + # local file is modified, but not synced so the owner still has the right file + list_files(d) + expect_modified(os.path.join(d, 'TEST_FILE_MODIFIED_USER_SHARE.dat'), shared['md5_sharer']) + expect_not_modified(shared['sharer.TEST_FILE_MODIFIED_USER_SHARE'], shared['md5_sharer']) + + step(8, 'Sharee One share files with user 3') + + user2 = "%s%i" % (config.oc_account_name, 2) + user3 = "%s%i" % (config.oc_account_name, 3) + kwargs = {'perms': sharePermissions} + result = remote_share_file_with_user('TEST_FILE_USER_RESHARE.dat', user2, user3, **kwargs) + + # FIXME Remote sharing ignores the share permission for now, so sharing should always work: + # FIXME https://github.com/owncloud/core/issues/22495 + # if not sharePermissions & OCS_PERMISSION_SHARE: + # error_check(result != -1, "An error should have occurred while sharing the file, but it worked") + # else: + # error_check(result != -1, "An error occurred while 
sharing the file") + error_check(result != -1, "An error occurred while sharing the file") + + step(11, 'Sharee one validates file does not exist after unsharing') + + run_ocsync(d, user_num=2) + list_files(d) + + shared_file = os.path.join(d, 'TEST_FILE_USER_RESHARE.dat') + logger.info('Checking that %s is not present in sharee local directory', shared_file) + expect_does_not_exist(shared_file) + + step(13, 'Sharee syncs and validates file does not exist') + + run_ocsync(d, user_num=2) + list_files(d) + + # May seem weird, but that is the current behaviour. The file is still there locally, + # but on the server the entry is a StorageNotAvailable exception, so webdav exists should not pass. + shared_file = os.path.join(d, 'TEST_FILE_USER_SHARE.dat') + logger.info('Checking that %s is present in sharee locally but not on webdav directory', shared_file) + expect_exists(shared_file) + expect_webdav_does_not_exist(shared_file, user_num=2) + + step(14, 'Sharee One final step') + + +@add_worker +def sharee_two(step): + + step(2, 'Sharee Two creates workdir') + d = make_workdir() + + step(9, 'Sharee two validates share file') + + run_ocsync(d, user_num=3) + list_files(d) + + # Accept the remote shares for user3 + user3 = "%s%i" % (config.oc_account_name, 3) + openShares = list_open_remote_share(user3) + for share in openShares: + accept_remote_share(user3, int(share['id'])) + + run_ocsync(d, user_num=3) + list_files(d) + + shared_file = os.path.join(d, 'TEST_FILE_USER_RESHARE.dat') + + # FIXME Remote sharing ignores the share permission for now, so sharing should always work: + # FIXME https://github.com/owncloud/core/issues/22495 + # if not sharePermissions & OCS_PERMISSION_SHARE: + # logger.info('Checking that %s is not present in local directory for Sharee Two', shared_file) + # expect_does_not_exist(shared_file) + # else: + # logger.info('Checking that %s is present in local directory for Sharee Two', shared_file) + # expect_exists(shared_file) + logger.info('Checking that %s is present in local directory for Sharee Two', shared_file) + expect_exists(shared_file) + + step(11, 'Sharee two validates file does not exist after unsharing') + + run_ocsync(d, user_num=3) + list_files(d) + + shared_file = os.path.join(d, 'TEST_FILE_USER_RESHARE.dat') + + # FIXME Remote sharing ignores the share permission for now, so sharing should always work: + # FIXME https://github.com/owncloud/core/issues/22495 + # if not sharePermissions & OCS_PERMISSION_SHARE: + # logger.info('Checking that %s is not present in sharee locally or the webdav directory', shared_file) + # expect_does_not_exist(shared_file) + # expect_webdav_does_not_exist(shared_file, user_num=3) + # else: + # # May seem weird, but that is the current behaviour. The file is still there locally, + # # but on the server the entry is a StorageNotAvailable exception, so webdav exists should not pass. 
+    #     logger.info('Checking that %s is present in sharee locally but not on webdav directory', shared_file)
+    #     expect_exists(shared_file)
+    #     expect_webdav_does_not_exist(shared_file, user_num=3)
+    logger.info('Checking that %s is present in sharee locally but not on webdav directory', shared_file)
+    expect_exists(shared_file)
+    expect_webdav_does_not_exist(shared_file, user_num=3)
+
+    step(14, 'Sharee Two final step')
diff --git a/lib/owncloud/test_shareMountInit.py b/lib/owncloud/test_shareMountInit.py
new file mode 100644
index 0000000..73ff565
--- /dev/null
+++ b/lib/owncloud/test_shareMountInit.py
@@ -0,0 +1,478 @@
+__doc__ = """
+
+This test exercises share mount initialization in the most common sharing cases. It checks for correct
+file propagation in scenarios where a user receives shared/reshared files and folders via group/user shares.
+
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+| step  | owner           | ownerRecipient | R2                | R3           | R4              |
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+| 2     | create dir      |                | create dir        | create dir   | create dir      |
+|       | share /test1    |                |                   |              |                 |
+|       | -> R2 R3 (group)|                |                   |              |                 |
+|       |                 |                |                   |              |                 |
+|       | share /test2    |                |                   |              |                 |
+|       | -> R4 (group)   |                |                   |              |                 |
+|       |                 |                |                   |              |                 |
+|       | share test1.txt |                |                   |              |                 |
+|       | -> R3 (user)    |                |                   |              |                 |
+|       |                 |                |                   |              |                 |
+|       | share test2.txt |                |                   |              |                 |
+|       | -> R4 (user)    |                |                   |              |                 |
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+| 3     |                 |                | reshare directory | reshare file |                 |
+|       |                 |                | -> R4             | -> R4        |                 |
+|       |                 |                | /test1/sub        | test1.txt    |                 |
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+| 4     | sync&check      | sync&check     | sync&check        | sync&check   | do nothing yet  |
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+| 5     | upload to       |                | upload to         | change       |                 |
+|       | -> /test1       |                | -> /test1/sub     | -> test1.txt |                 |
+|       | -> /test2       |                |                   |              |                 |
+|       |                 |                |                   |              |                 |
+|       | change          |                |                   |              |                 |
+|       | -> test2.txt    |                |                   |              |                 |
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+| 6     | sync&check      | sync&check     | sync&check        | sync&check   |                 |
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+| 7     |                 |                |                   |              | sync&check      |
+|       |                 |                |                   |              | (initMounts)    |
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+| 8     |                 |                |                   |              | create dir      |
+|       |                 |                |                   |              | -> shared       |
+|       |                 |                |                   |              | -> reshared     |
+|       |                 |                |                   |              |                 |
+|       |                 |                |                   |              | move shared     |
+|       |                 |                |                   |              | -> shared       |
+|       |                 |                |                   |              |                 |
+|       |                 |                |                   |              | move reshared   |
+|       |                 |                |                   |              | -> reshared     |
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+| 9     | sync&check      | sync&check     | sync&check        | sync&check   |                 |
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+| 10    |                 |                |                   |              | resync&check    |
++-------+-----------------+----------------+-------------------+--------------+-----------------+
+"""
+from smashbox.utilities import *
+import itertools
+import os.path
+import re
+import operator as op
+
+# True => use new webdav endpoint (dav/files)
+# False => use old webdav endpoint (webdav)
+use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True))
+
+testsets = [
+    {
+        'use_new_dav_endpoint':False
+    },
+    {
+        'use_new_dav_endpoint':True
+    }
+]
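For readers unfamiliar with the "initMounts" step in the table above: the recipient's share mount points are initialized by the first PROPFIND against their WebDAV root, which the step-7 sync performs implicitly. As a standalone illustration only (the URL shape and credentials here are assumptions; the test itself goes through `run_ocsync`):

```
import requests

def trigger_mount_init(base_url, user, password):
    """Issue a depth-1 PROPFIND on the user's WebDAV root.

    Assumption for illustration: on ownCloud, the first PROPFIND after
    receiving shares initializes the share mount points, which is what
    this test's step 7 exercises via a full sync.
    """
    url = '%s/remote.php/webdav/' % base_url.rstrip('/')
    r = requests.request('PROPFIND', url,
                         auth=(user, password),
                         headers={'Depth': '1'})
    r.raise_for_status()
    return r.status_code  # typically 207 Multi-Status

# trigger_mount_init('http://localhost/owncloud', 'user4', 'secret')
```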
(config.oc_group_name, i)
+
+def get_account_name(i):
+    return '%s%i' % (config.oc_account_name, i)
+
+group_map = {
+    # maps the group name to the usernums belonging to the group
+    get_group_name(1) : [2,3],
+    get_group_name(2) : [4,5],
+    get_group_name(3) : [6,7],
+    get_group_name(4) : [8,9],
+}
+
+def run_group_ocsync(d, group_name):
+    for usernum in group_map[group_name]:
+        run_ocsync(os.path.join(d, str(usernum)), user_num=usernum, use_new_dav_endpoint=use_new_dav_endpoint)
+
+@add_worker
+def setup(step):
+
+    step(1, 'create test users')
+    num_users = 9
+
+    # Create additional accounts
+    if config.oc_number_test_users < num_users:
+        for i in range(config.oc_number_test_users + 1, num_users + 1):
+            username = "%s%i" % (config.oc_account_name, i)
+            delete_owncloud_account(username)
+            create_owncloud_account(username, config.oc_account_password)
+            login_owncloud_account(username, config.oc_account_password)
+
+    check_users(num_users)
+    reset_owncloud_group(num_groups=4)
+
+    for group in group_map:
+        for user in group_map[group]:
+            add_user_to_group(get_account_name(user), group)
+
+def finish_if_not_capable():
+    # Finish the test if some of the prerequisites for this test are not satisfied
+    if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True:
+        # Don't test with the new endpoint on servers <= 9.1, since it is not supported there
+        logger.warn("Skipping test since webdav endpoint is not capable for this server version")
+        return True
+    return False
+
+@add_worker
+def owner(step):
+    if finish_if_not_capable():
+        return
+
+    user = '%s%i' % (config.oc_account_name, 1)
+
+    step (2, 'Create workdir')
+    d = make_workdir()
+
+    mkdir(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB'))
+    mkdir(os.path.join(d, 'TEST_SHARE_FOLDER'))
+
+    shared = reflection.getSharedObject()
+    createfile(os.path.join(d, 'TEST_RESHARE_FILE.dat'), '00', count=100, bs=10)
+    shared['TEST_RESHARE_FILE'] = md5sum(os.path.join(d, 'TEST_RESHARE_FILE.dat'))
+
+    createfile(os.path.join(d, 'TEST_SHARE_FILE.dat'), '01', count=100, bs=10)
+    shared['TEST_SHARE_FILE'] = md5sum(os.path.join(d, 'TEST_SHARE_FILE.dat'))
+    run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint)
+    client.login(user, config.oc_account_password)
+
+    # Share with group users R2 and R3
+    group1 = get_group_name(1)
+    share1_data = client.share_file_with_group('/TEST_RESHARE_SUBFOLDER', group1, perms=31)
+    fatal_check(share1_data, 'failed sharing a file with %s' % (group1,))
+
+    user3 = '%s%i' % (config.oc_account_name, 3)
+    share1_data = client.share_file_with_user('/TEST_RESHARE_FILE.dat', user3, perms=31)
+    fatal_check(share1_data, 'failed sharing a file with %s' % (user3,))
+
+    # Share with group user R4
+    group2 = get_group_name(2)
+    share2_data = client.share_file_with_group('/TEST_SHARE_FOLDER', group2, perms=31)
+    fatal_check(share2_data, 'failed sharing a file with %s' % (group2,))
+
+    user4 = '%s%i' % (config.oc_account_name, 4)
+    share2_data = client.share_file_with_user('/TEST_SHARE_FILE.dat', user4, perms=31)
+    fatal_check(share2_data, 'failed sharing a file with %s' % (user4,))
+
+    step(5, 'Upload files to TEST_SHARE_FOLDER, TEST_RESHARE_SUBFOLDER and change TEST_SHARE_FILE')
+
+    createfile(os.path.join(d, 'TEST_SHARE_FOLDER', 'TEST_SHARE_FOLDER_FILE.dat'), '02', count=100, bs=10)
+    shared['TEST_SHARE_FOLDER_FILE'] = md5sum(os.path.join(d, 'TEST_SHARE_FOLDER', 'TEST_SHARE_FOLDER_FILE.dat'))
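+
+    # Note added for clarity: checksums are stashed in the shared object so that the
+    # receiving workers can later verify (via expect_not_modified) that syncing through
+    # the share mounts did not corrupt the content.
+    createfile(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 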
'TEST_RESHARE_SUBFOLDER_FILE.dat'), '03', count=100, bs=10) + shared['TEST_RESHARE_SUBFOLDER_FILE'] = md5sum(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'TEST_RESHARE_SUBFOLDER_FILE.dat')) + + modify_file(os.path.join(d,'TEST_SHARE_FILE.dat'),'11',count=100,bs=10) + shared['TEST_SHARE_FILE'] = md5sum(os.path.join(d, 'TEST_SHARE_FILE.dat')) + + run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) + + step(6, 'Resync and check') + + run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folders have been synced down correctly + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + expect_exists(os.path.join(d, 'TEST_SHARE_FOLDER')) + + # Check that files have been synced down correctly + shared = reflection.getSharedObject() + expect_not_modified(os.path.join(d, 'TEST_RESHARE_FILE.dat'), shared['TEST_RESHARE_FILE']) + expect_not_modified(os.path.join(d, 'TEST_SHARE_FILE.dat'), shared['TEST_SHARE_FILE']) + + expect_not_modified(os.path.join(d, 'TEST_SHARE_FOLDER', 'TEST_SHARE_FOLDER_FILE.dat'), shared['TEST_SHARE_FOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'TEST_RESHARE_SUBFOLDER_FILE.dat'), shared['TEST_RESHARE_SUBFOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), shared['TEST_RESHARE_SUBFOLDER_SUB_FILE']) + + step(8, 'Resync and check') + + run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folders have been synced down correctly + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + expect_exists(os.path.join(d, 'TEST_SHARE_FOLDER')) + + # Check that files have been synced down correctly + shared = reflection.getSharedObject() + expect_not_modified(os.path.join(d, 'TEST_RESHARE_FILE.dat'), shared['TEST_RESHARE_FILE']) + expect_not_modified(os.path.join(d, 'TEST_SHARE_FILE.dat'), shared['TEST_SHARE_FILE']) + + expect_not_modified(os.path.join(d, 'TEST_SHARE_FOLDER', 'TEST_SHARE_FOLDER_FILE.dat'), shared['TEST_SHARE_FOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'TEST_RESHARE_SUBFOLDER_FILE.dat'), shared['TEST_RESHARE_SUBFOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), shared['TEST_RESHARE_SUBFOLDER_SUB_FILE']) + +@add_worker +def ownerRecipient(step): + if finish_if_not_capable(): + return + + step (2, 'Create workdir') + d = make_workdir() + + step (4, 'Sync and check required files') + + run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folder has been synced down + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + + # Check that file has been synced down + shared = reflection.getSharedObject() + expect_not_modified(os.path.join(d, 'TEST_RESHARE_FILE.dat'), shared['TEST_RESHARE_FILE']) + expect_not_modified(os.path.join(d, 'TEST_SHARE_FILE.dat'), shared['TEST_SHARE_FILE']) + + step(6, 'Resync and check') + + run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folders have been synced down correctly + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + expect_exists(os.path.join(d, 'TEST_SHARE_FOLDER')) + + # Check that files have been synced down correctly + shared = reflection.getSharedObject() + expect_not_modified(os.path.join(d, 'TEST_RESHARE_FILE.dat'), shared['TEST_RESHARE_FILE']) + expect_not_modified(os.path.join(d, 'TEST_SHARE_FILE.dat'), 
shared['TEST_SHARE_FILE']) + + expect_not_modified(os.path.join(d, 'TEST_SHARE_FOLDER', 'TEST_SHARE_FOLDER_FILE.dat'), + shared['TEST_SHARE_FOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'TEST_RESHARE_SUBFOLDER_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_SUB_FILE']) + + step(8, 'Resync and check') + + run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folders have been synced down correctly + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + expect_exists(os.path.join(d, 'TEST_SHARE_FOLDER')) + + # Check that files have been synced down correctly + shared = reflection.getSharedObject() + expect_not_modified(os.path.join(d, 'TEST_RESHARE_FILE.dat'), shared['TEST_RESHARE_FILE']) + expect_not_modified(os.path.join(d, 'TEST_SHARE_FILE.dat'), shared['TEST_SHARE_FILE']) + + expect_not_modified(os.path.join(d, 'TEST_SHARE_FOLDER', 'TEST_SHARE_FOLDER_FILE.dat'), + shared['TEST_SHARE_FOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'TEST_RESHARE_SUBFOLDER_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_SUB_FILE']) + +@add_worker +def recipient2(step): + if finish_if_not_capable(): + return + + user = '%s%i' % (config.oc_account_name, 2) + + step (2, 'Create workdir') + d = make_workdir() + + group2 = get_group_name(2) + step(3, 'Reshare /TEST_RESHARE_SUBFOLDER/SUB with %s' % (group2)) + + client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint) + client.login(user, config.oc_account_password) + # only the first user of the group shares with another group, to keep it simple + share1_data = client.share_file_with_group('/TEST_RESHARE_SUBFOLDER/SUB', group2, perms=31) + fatal_check(share1_data, 'failed sharing a file with %s' % (group2)) + + step(4, 'Sync and check required files') + + run_ocsync(d, user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that shared folder exists + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + + step(5, 'Upload files to TEST_RESHARE_SUBFOLDER/SUB') + + shared = reflection.getSharedObject() + createfile(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), '04', count=100, bs=10) + shared['TEST_RESHARE_SUBFOLDER_SUB_FILE'] = md5sum( + os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat')) + + run_ocsync(d, user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) + + step(6, 'Resync and check') + + run_ocsync(d, user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folders have been synced down correctly + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + expect_does_not_exist(os.path.join(d, 'TEST_SHARE_FOLDER')) + expect_does_not_exist(os.path.join(d, 'TEST_SHARE_FILE.dat')) + expect_does_not_exist(os.path.join(d, 'TEST_RESHARE_FILE.dat')) + + # Check that files have been synced down correctly + shared = reflection.getSharedObject() + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'TEST_RESHARE_SUBFOLDER_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_SUB_FILE']) + 
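+    # Note added for clarity: recipient4 rearranges its received shares in step 8;
+    # moving a received share mount is expected to be a per-user operation, so the
+    # resync below should leave this worker's view unchanged (verified by the checks).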
+ step(8, 'Resync and check') + + run_ocsync(d, user_num=2, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folders have been synced down correctly + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + expect_does_not_exist(os.path.join(d, 'TEST_SHARE_FOLDER')) + expect_does_not_exist(os.path.join(d, 'TEST_SHARE_FILE.dat')) + expect_does_not_exist(os.path.join(d, 'TEST_RESHARE_FILE.dat')) + + # Check that files have been synced down correctly + shared = reflection.getSharedObject() + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'TEST_RESHARE_SUBFOLDER_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_SUB_FILE']) + + +@add_worker +def recipient3(step): + if finish_if_not_capable(): + return + + user = '%s%i' % (config.oc_account_name, 3) + + step (2, 'Create workdir') + d = make_workdir() + + group2 = get_group_name(2) + step(3, 'Reshare /TEST_RESHARE_FILE.dat with %s' % (group2)) + + client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint) + client.login(user, config.oc_account_password) + # only the first user of the group shares with another group, to keep it simple + share1_data = client.share_file_with_group('/TEST_RESHARE_FILE.dat', group2, perms=31) + fatal_check(share1_data, 'failed sharing a file with %s' % (group2)) + + + step(4, 'Sync and check required files') + + run_ocsync(d, user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that shared folder exists + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + + # Check that shared file exist + shared = reflection.getSharedObject() + expect_not_modified(os.path.join(d, 'TEST_RESHARE_FILE.dat'), shared['TEST_RESHARE_FILE']) + + step(5, 'Change file TEST_RESHARE_FILE') + + modify_file(os.path.join(d, 'TEST_RESHARE_FILE.dat'), '10', count=100, bs=10) + shared['TEST_RESHARE_FILE'] = md5sum( + os.path.join(d, 'TEST_RESHARE_FILE.dat')) + + run_ocsync(d, user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) + + step(6, 'Resync and check') + + run_ocsync(d, user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folders have been synced down correctly + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + expect_does_not_exist(os.path.join(d, 'TEST_SHARE_FOLDER')) + expect_does_not_exist(os.path.join(d, 'TEST_SHARE_FILE.dat')) + + # Check that files have been synced down correctly + shared = reflection.getSharedObject() + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'TEST_RESHARE_SUBFOLDER_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_SUB_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_FILE.dat'), shared['TEST_RESHARE_FILE']) + + step(8, 'Resync and check') + + run_ocsync(d, user_num=3, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folders have been synced down correctly + expect_exists(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB')) + expect_does_not_exist(os.path.join(d, 'TEST_SHARE_FOLDER')) + expect_does_not_exist(os.path.join(d, 'TEST_SHARE_FILE.dat')) + + # Check that files have been synced down correctly + shared = reflection.getSharedObject() + expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'TEST_RESHARE_SUBFOLDER_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_FILE']) + 
expect_not_modified(os.path.join(d, 'TEST_RESHARE_SUBFOLDER', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_SUB_FILE']) + expect_not_modified(os.path.join(d, 'TEST_RESHARE_FILE.dat'), shared['TEST_RESHARE_FILE']) + +@add_worker +def recipient4(step): + if finish_if_not_capable(): + return + + user = '%s%i' % (config.oc_account_name, 4) + + step (2, 'Create workdir') + d = make_workdir() + + step(6, 'Initialize share mounts (sync and check)') + + run_ocsync(d, user_num=4, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folders have been synced down correctly + expect_exists(os.path.join(d, 'SUB')) + expect_exists(os.path.join(d, 'TEST_SHARE_FOLDER')) + expect_does_not_exist(os.path.join(d, 'TEST_RESHARE_SUBFOLDER')) + + # Check that files have been synced down correctly + shared = reflection.getSharedObject() + + expect_not_modified(os.path.join(d, 'TEST_SHARE_FOLDER', 'TEST_SHARE_FOLDER_FILE.dat'), + shared['TEST_SHARE_FOLDER_FILE']) + + expect_not_modified(os.path.join(d, 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_SUB_FILE']) + + expect_not_modified(os.path.join(d, 'TEST_RESHARE_FILE.dat'), shared['TEST_RESHARE_FILE']) + + expect_not_modified(os.path.join(d, 'TEST_SHARE_FILE.dat'), shared['TEST_SHARE_FILE']) + + step(8, 'Create SHARED_ITEMS and RESHARED_ITEMS folder and move relevant files there') + + mkdir(os.path.join(d, 'SHARED_ITEMS')) + mkdir(os.path.join(d, 'RESHARED_ITEMS')) + + mv(os.path.join(d, 'TEST_SHARE_FOLDER'), (os.path.join(d, 'SHARED_ITEMS'))) + mv(os.path.join(d, 'TEST_SHARE_FILE.dat'), (os.path.join(d, 'SHARED_ITEMS'))) + + mv(os.path.join(d, 'TEST_RESHARE_FILE.dat'), (os.path.join(d, 'RESHARED_ITEMS'))) + mv(os.path.join(d, 'SUB'), (os.path.join(d, 'RESHARED_ITEMS'))) + + run_ocsync(d, user_num=4, use_new_dav_endpoint=use_new_dav_endpoint) + + step(10, 'Sync and check') + + run_ocsync(d, user_num=4, use_new_dav_endpoint=use_new_dav_endpoint) + + # Check that folders have been synced down correctly + expect_exists(os.path.join(d, 'RESHARED_ITEMS', 'SUB')) + expect_exists(os.path.join(d, 'SHARED_ITEMS', 'TEST_SHARE_FOLDER')) + expect_does_not_exist(os.path.join(d, 'TEST_RESHARE_SUBFOLDER')) + + # Check that files have been synced down correctly + shared = reflection.getSharedObject() + + expect_not_modified(os.path.join(d, 'SHARED_ITEMS', 'TEST_SHARE_FOLDER', 'TEST_SHARE_FOLDER_FILE.dat'), + shared['TEST_SHARE_FOLDER_FILE']) + + expect_not_modified(os.path.join(d, 'RESHARED_ITEMS', 'SUB', 'TEST_RESHARE_SUBFOLDER_SUB_FILE.dat'), + shared['TEST_RESHARE_SUBFOLDER_SUB_FILE']) + + expect_not_modified(os.path.join(d, 'RESHARED_ITEMS', 'TEST_RESHARE_FILE.dat'), shared['TEST_RESHARE_FILE']) + + expect_not_modified(os.path.join(d, 'SHARED_ITEMS', 'TEST_SHARE_FILE.dat'), shared['TEST_SHARE_FILE']) diff --git a/lib/owncloud/test_sharePropagation.py b/lib/owncloud/test_sharePropagation.py new file mode 100644 index 0000000..3f5e6b8 --- /dev/null +++ b/lib/owncloud/test_sharePropagation.py @@ -0,0 +1,257 @@ +__doc__ = """ +Test share etag propagation + ++-------------+-------------------------+-------------------------+----------------------+ +| step number | owner | R2 R3 | R4 | ++-------------+-------------------------+-------------------------+----------------------+ +| 2 | create working dir | create working dir | create working dir | +| | share folder with R2 R3 | | | ++-------------+-------------------------+-------------------------+----------------------+ +| 3 | sync | | | 
++-------------+-------------------------+-------------------------+----------------------+ +| 4 | verify propagation | verify propagation | | ++-------------+-------------------------+-------------------------+----------------------+ +| 5 | | upload in shared dir | | ++-------------+-------------------------+-------------------------+----------------------+ +| 6 | verify propagation | verify propagation | | ++-------------+-------------------------+-------------------------+----------------------+ +| 7 | unshare folder | | | ++-------------+-------------------------+-------------------------+----------------------+ +| 8 | verify etag is the same | verify propagation | | ++-------------+-------------------------+-------------------------+----------------------+ +| 9 | share folder with R2 R3 | | | ++-------------+-------------------------+-------------------------+----------------------+ +| 10 | | R2 reshare with R4 | | ++-------------+-------------------------+-------------------------+----------------------+ +| 11 | verify etag is the same | verify propagation | verify propagation | ++-------------+-------------------------+-------------------------+----------------------+ +| 12 | | R2 upload in shared dir | | ++-------------+-------------------------+-------------------------+----------------------+ +| 13 | verify propagation | verify propagation | verify propagation | ++-------------+-------------------------+-------------------------+----------------------+ +| 14 | | | upload in shared dir | ++-------------+-------------------------+-------------------------+----------------------+ +| 15 | verify propagation | verify propagation | verify propagation | ++-------------+-------------------------+-------------------------+----------------------+ +| 16 | | R2 unshares folder | | ++-------------+-------------------------+-------------------------+----------------------+ +| 17 | verify etag is the same | verify etag is the same | verify propagation | ++-------------+-------------------------+-------------------------+----------------------+ +""" + +from smashbox.utilities import * +import itertools +import os.path +import re + +def parse_worker_number(worker_name): + match = re.search(r'(\d+)$', worker_name) + if match is not None: + return int(match.group()) + else: + return None + +@add_worker +def setup(step): + + step(1, 'create test users') + + num_users = 4 + + # Create additional accounts + if config.oc_number_test_users < num_users: + for i in range(config.oc_number_test_users + 1, num_users + 1): + username = "%s%i" % (config.oc_account_name, i) + delete_owncloud_account(username) + create_owncloud_account(username, config.oc_account_password) + login_owncloud_account(username, config.oc_account_password) + + check_users(num_users) + +@add_worker +def owner(step): + + user = '%s%i' % (config.oc_account_name, 1) + + step (2, 'Create workdir') + d = make_workdir() + + mkdir(os.path.join(d, 'test', 'sub')) + run_ocsync(d, user_num=1) + + client = get_oc_api() + client.login(user, config.oc_account_password) + # make sure folder is shared + user2 = '%s%i' % (config.oc_account_name, 2) + share1_data = client.share_file_with_user('/test', user2, perms=31) + fatal_check(share1_data, 'failed sharing a file with %s' % (user2,)) + + user3 = '%s%i' % (config.oc_account_name, 3) + share2_data = client.share_file_with_user('/test', user3, perms=31) + fatal_check(share2_data, 'failed sharing a file with %s' % (user3,)) + + root_etag = client.file_info('/').get_etag() + + step(3, 'Upload file') + 
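# Note added for clarity: the root_etag captured above is the baseline; each later
+    # step fetches a fresh etag for / and compares it against the previous value.
+    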
createfile(os.path.join(d, 'test', 'test.txt'), '1', count=1000, bs=10)
+    run_ocsync(d, user_num=1)
+
+    step(4, 'Verify etag propagation')
+    root_etag2 = client.file_info('/').get_etag()
+    error_check(root_etag != root_etag2, 'owner uploads /test/test.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag, root_etag2))
+
+    step(6, 'verify another etag propagation')
+    root_etag3 = client.file_info('/').get_etag()
+    error_check(root_etag2 != root_etag3, 'recipients upload to /test/test2.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag2, root_etag3))
+
+    step(7, 'unshare')
+    client.delete_share(share1_data.share_id)
+    client.delete_share(share2_data.share_id)
+
+    step(8, 'verify etag propagation')
+    root_etag4 = client.file_info('/').get_etag()
+    error_check(root_etag3 == root_etag4, 'owner unshares '
+                'etag for / previous [%s] new [%s]' % (root_etag3, root_etag4))
+
+    step(9, 'share again the files')
+    share1_data = client.share_file_with_user('/test', user2, perms=31)
+    fatal_check(share1_data, 'failed sharing a file with %s' % (user2,))
+    share2_data = client.share_file_with_user('/test', user3, perms=31)
+    fatal_check(share2_data, 'failed sharing a file with %s' % (user3,))
+
+    step(11, 'verify etag propagation')
+    root_etag5 = client.file_info('/').get_etag()
+    error_check(root_etag4 == root_etag5, 'recipient 2 reshares /test to recipient 4 '
+                'etag for / previous [%s] new [%s]' % (root_etag4, root_etag5))
+
+    step(13, 'verify etag propagation')
+    root_etag6 = client.file_info('/').get_etag()
+    error_check(root_etag5 != root_etag6, 'recipient 2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag5, root_etag6))
+
+    step(15, 'verify etag propagation')
+    root_etag7 = client.file_info('/').get_etag()
+    error_check(root_etag6 != root_etag7, 'recipient 4 uploads /test/test4.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag6, root_etag7))
+
+    step(17, 'verify etag is the same')
+    root_etag8 = client.file_info('/').get_etag()
+    # It shouldn't be propagated here in this case
+    error_check(root_etag7 == root_etag8, 'recipient 2 unshares the reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag7, root_etag8))
+
+def recipients(step):
+
+    usernum = parse_worker_number(reflection.getProcessName())
+    user = '%s%i' % (config.oc_account_name, usernum)
+
+    step (2, 'Create workdir')
+
+    d = make_workdir()
+    run_ocsync(d, user_num=usernum)
+
+    client = get_oc_api()
+    client.login(user, config.oc_account_password)
+    root_etag = client.file_info('/').get_etag()
+
+    step(4, 'verify etag propagation')
+    run_ocsync(d, user_num=usernum)
+
+    root_etag2 = client.file_info('/').get_etag()
+    error_check(root_etag != root_etag2, 'owner uploads /test/test.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag, root_etag2))
+
+    step(5, 'upload to shared folder')
+    if usernum == 2:
+        createfile(os.path.join(d, 'test', 'test2.txt'), '2', count=1000, bs=10)
+        run_ocsync(d, user_num=usernum)
+
+    step(6, 'verify another etag propagation')
+    root_etag3 = client.file_info('/').get_etag()
+    error_check(root_etag2 != root_etag3, 'recipients upload to /test/test2.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag2, root_etag3))
+
+    step(8, 'verify etag propagation')
+    root_etag4 = client.file_info('/').get_etag()
+    error_check(root_etag3 != root_etag4, 'owner unshares '
+                'etag for / previous [%s] new [%s]' % (root_etag3, root_etag4))
+
+    step(10, 'reshare file')
+    if usernum == 2:
+        user4 = '%s%i' % (config.oc_account_name, 4)
+        share_data = client.share_file_with_user('/test', user4, perms=31)
+
+    step(11, 'verify etag propagation')
+    root_etag5 = client.file_info('/').get_etag()
+    error_check(root_etag4 != root_etag5, 'recipient 2 reshares /test to recipient 4 '
+                'etag for / previous [%s] new [%s]' % (root_etag4, root_etag5))
+
+    step(12, 'recipient 2 uploads a file')
+    if usernum == 2:
+        createfile(os.path.join(d, 'test', 'test3.txt'), '3', count=1000, bs=10)
+        run_ocsync(d, user_num=usernum)
+
+    step(13, 'verify etag propagation')
+    root_etag6 = client.file_info('/').get_etag()
+    error_check(root_etag5 != root_etag6, 'recipient 2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag5, root_etag6))
+
+    step(15, 'verify etag propagation')
+    root_etag7 = client.file_info('/').get_etag()
+    error_check(root_etag6 != root_etag7, 'recipient 4 uploads /test/test4.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag6, root_etag7))
+
+    step(16, 'unshare file')
+    if usernum == 2:
+        client.delete_share(share_data.share_id)
+
+    step(17, 'verify etag propagation')
+    root_etag8 = client.file_info('/').get_etag()
+    # recipients 2 and 3 aren't affected by the unshare
+    error_check(root_etag7 == root_etag8, 'recipient 2 unshares the reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag7, root_etag8))
+
+@add_worker
+def recipient_4(step):
+    usernum = parse_worker_number(reflection.getProcessName())
+    user = '%s%i' % (config.oc_account_name, usernum)
+
+    step (2, 'Create workdir')
+
+    d = make_workdir()
+    run_ocsync(d, user_num=usernum)
+
+    client = get_oc_api()
+    client.login(user, config.oc_account_password)
+    root_etag = client.file_info('/').get_etag()
+
+    step(11, 'verify etag propagation')
+    root_etag5 = client.file_info('/').get_etag()
+    error_check(root_etag != root_etag5, 'recipient 2 reshares /test to recipient 4 '
+                'etag for / previous [%s] new [%s]' % (root_etag, root_etag5))
+
+    step(13, 'verify etag propagation')
+    root_etag6 = client.file_info('/').get_etag()
+    error_check(root_etag5 != root_etag6, 'recipient 2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag5, root_etag6))
+
+    step(14, 'upload file')
+    run_ocsync(d, user_num=usernum)
+    createfile(os.path.join(d, 'test', 'test4.txt'), '4', count=1000, bs=10)
+    run_ocsync(d, user_num=usernum)
+
+    step(15, 'verify etag propagation')
+    root_etag7 = client.file_info('/').get_etag()
+    error_check(root_etag6 != root_etag7, 'recipient 4 uploads /test/test4.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag6, root_etag7))
+
+    step(17, 'verify etag propagation')
+    root_etag8 = client.file_info('/').get_etag()
+    error_check(root_etag7 != root_etag8, 'recipient 2 unshares the reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag7, root_etag8))
+
+for i in range(2,4):
+    add_worker(recipients, name='recipient_%s' % (i,))
+
diff --git a/lib/owncloud/test_sharePropagationGroups.py b/lib/owncloud/test_sharePropagationGroups.py
new file mode 100644
index 0000000..7a719a6
--- /dev/null
+++ b/lib/owncloud/test_sharePropagationGroups.py
@@ -0,0 +1,350 @@
+__doc__ = """
+Test share etag propagation
+
++-------------+-------------------------+-------------------------+----------------------+
+| step number | owner                   | R2 R3                   | R4                   |
++-------------+-------------------------+-------------------------+----------------------+
+| 2           | create working dir      | create working dir      | create working dir   |
+|             | share folder with R2 R3 |                         |                      |
++-------------+-------------------------+-------------------------+----------------------+ +| 3 | sync | | | ++-------------+-------------------------+-------------------------+----------------------+ +| 4 | verify propagation | verify propagation | | ++-------------+-------------------------+-------------------------+----------------------+ +| 5 | | upload in shared dir | | ++-------------+-------------------------+-------------------------+----------------------+ +| 6 | verify propagation | verify propagation | | ++-------------+-------------------------+-------------------------+----------------------+ +| 7 | unshare folder | | | ++-------------+-------------------------+-------------------------+----------------------+ +| 8 | verify etag is the same | verify propagation | | ++-------------+-------------------------+-------------------------+----------------------+ +| 9 | share folder with R2 R3 | | | ++-------------+-------------------------+-------------------------+----------------------+ +| 10 | | R2 reshare with R4 | | ++-------------+-------------------------+-------------------------+----------------------+ +| 11 | verify etag is the same | verify propagation | verify propagation | ++-------------+-------------------------+-------------------------+----------------------+ +| 12 | | R2 upload in shared dir | | ++-------------+-------------------------+-------------------------+----------------------+ +| 13 | verify propagation | verify propagation | verify propagation | ++-------------+-------------------------+-------------------------+----------------------+ +| 14 | | | upload in shared dir | ++-------------+-------------------------+-------------------------+----------------------+ +| 15 | verify propagation | verify propagation | verify propagation | ++-------------+-------------------------+-------------------------+----------------------+ +| 16 | | R2 unshares folder | | ++-------------+-------------------------+-------------------------+----------------------+ +| 17 | verify etag is the same | verify etag is the same | verify propagation | ++-------------+-------------------------+-------------------------+----------------------+ +""" + +from smashbox.utilities import * +import itertools +import os.path +import re +import operator as op + +# True => use new webdav endpoint (dav/files) +# False => use old webdav endpoint (webdav) +use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True)) + +testsets = [ + { + 'use_new_dav_endpoint':False + }, + { + 'use_new_dav_endpoint':True + } +] + +def get_group_name(i): + return '%s%i' % (config.oc_group_name, i) + +def get_account_name(i): + return '%s%i' % (config.oc_account_name, i) + +group_map = { + # maps the group name with the usernum belonging to the group + get_group_name(1) : [2,3], + get_group_name(2) : [4,5], + get_group_name(3) : [6,7], +} + +def compare_list(list1, list2, func): + """ + Compare the list item by item using the function func. 
If func returns False, compare list + will return False + """ + if len(list1) != len(list2): + return False + + for index in range(0, len(list1)): + if not func(list1[index], list2[index]): + return False + return True + +def get_client_etags(clients): + new_etags = [] + for client in clients: + new_etags.append(client.file_info('/').get_etag()) + + return new_etags + +def run_group_ocsync(d, group_name): + for usernum in group_map[group_name]: + run_ocsync(os.path.join(d, str(usernum)), user_num=usernum, use_new_dav_endpoint=use_new_dav_endpoint) + +def parse_worker_number(worker_name): + match = re.search(r'(\d+)$', worker_name) + if match is not None: + return int(match.group()) + else: + return None + +def finish_if_not_capable(): + # Finish the test if some of the prerequisites for this test are not satisfied + if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True: + #Dont test for <= 9.1 with new endpoint, since it is not supported + logger.warn("Skipping test since webdav endpoint is not capable for this server version") + return True + return False + +@add_worker +def setup(step): + if finish_if_not_capable(): + return + + step(1, 'create test users') + num_users = 7 + + # Create additional accounts + if config.oc_number_test_users < num_users: + for i in range(config.oc_number_test_users + 1, num_users + 1): + username = "%s%i" % (config.oc_account_name, i) + delete_owncloud_account(username) + create_owncloud_account(username, config.oc_account_password) + login_owncloud_account(username, config.oc_account_password) + + check_users(num_users) + reset_owncloud_group(num_groups=3) + + for group in group_map: + for user in group_map[group]: + add_user_to_group(get_account_name(user), group) + +@add_worker +def owner(step): + if finish_if_not_capable(): + return + + user = '%s%i' % (config.oc_account_name, 1) + + step (2, 'Create workdir') + d = make_workdir() + + mkdir(os.path.join(d, 'test', 'sub')) + run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) + + client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint) + client.login(user, config.oc_account_password) + # make sure folder is shared + group1 = get_group_name(1) + share1_data = client.share_file_with_group('/test', group1, perms=31) + fatal_check(share1_data, 'failed sharing a file with %s' % (group1,)) + + group2 = get_group_name(2) + share2_data = client.share_file_with_group('/test', group2, perms=31) + fatal_check(share2_data, 'failed sharing a file with %s' % (group2,)) + + root_etag = client.file_info('/').get_etag() + + step(3, 'Upload file') + createfile(os.path.join(d, 'test', 'test.txt'), '1', count=1000, bs=10) + run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint) + + step(4, 'Verify etag propagation') + root_etag2 = client.file_info('/').get_etag() + error_check(root_etag != root_etag2, 'owner uploads /test/test.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag, root_etag2)) + + step(6, 'verify another etag propagation') + root_etag3 = client.file_info('/').get_etag() + error_check(root_etag2 != root_etag3, 'recipients upload to /test/test2.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag2, root_etag3)) + + step(7, 'unshare') + client.delete_share(share1_data.share_id) + client.delete_share(share2_data.share_id) + + step(8, 'verify etag propagation') + root_etag4 = client.file_info('/').get_etag() + error_check(root_etag3 == root_etag4, 'owner unshares ' + 'etag for / previous [%s] new [%s]' % (root_etag3, root_etag4)) + + step(9, 'share again 
the files')
+    share1_data = client.share_file_with_group('/test', group1, perms=31)
+    fatal_check(share1_data, 'failed sharing a file with %s' % (group1,))
+    share2_data = client.share_file_with_group('/test', group2, perms=31)
+    fatal_check(share2_data, 'failed sharing a file with %s' % (group2,))
+
+    step(11, 'verify etag propagation')
+    root_etag5 = client.file_info('/').get_etag()
+    error_check(root_etag4 == root_etag5, 'recipient 2 reshares /test to recipient 4 '
+                'etag for / previous [%s] new [%s]' % (root_etag4, root_etag5))
+
+    step(13, 'verify etag propagation')
+    root_etag6 = client.file_info('/').get_etag()
+    error_check(root_etag5 != root_etag6, 'recipient 2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag5, root_etag6))
+
+    step(15, 'verify etag propagation')
+    root_etag7 = client.file_info('/').get_etag()
+    error_check(root_etag6 != root_etag7, 'recipient 4 uploads /test/test4.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag6, root_etag7))
+
+    step(17, 'verify etag is the same')
+    root_etag8 = client.file_info('/').get_etag()
+    # It shouldn't be propagated here in this case
+    error_check(root_etag7 == root_etag8, 'recipient 2 unshares the reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag7, root_etag8))
+
+def recipients(step):
+    if finish_if_not_capable():
+        return
+
+    groupnum = parse_worker_number(reflection.getProcessName())
+    group = get_group_name(groupnum)
+
+    step (2, 'Create workdir')
+
+    d = make_workdir()
+    for usernum in group_map[group]:
+        mkdir(os.path.join(d, str(usernum)))
+
+    run_group_ocsync(d, group)
+
+    clients = []
+    for usernum in group_map[group]:
+        client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint)
+        client.login(get_account_name(usernum), config.oc_account_password)
+        clients.append(client)
+
+    root_etags = get_client_etags(clients)
+
+    step(4, 'verify etag propagation')
+    run_group_ocsync(d, group)
+
+    root_etags2 = get_client_etags(clients)
+    error_check(compare_list(root_etags, root_etags2, op.ne), 'owner uploads /test/test.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags, root_etags2))
+
+    step(5, 'upload to shared folder')
+    # Create a file just in one of the users of the group
+    if groupnum == 1:
+        createfile(os.path.join(d, str(group_map[group][0]), 'test', 'test2.txt'), '2', count=1000, bs=10)
+        # the group sync is done sequentially so there shouldn't be issues syncing
+        run_group_ocsync(d, group)
+
+    step(6, 'verify another etag propagation')
+    if groupnum != 1:
+        run_group_ocsync(d, group)
+    root_etags3 = get_client_etags(clients)
+    error_check(compare_list(root_etags2, root_etags3, op.ne), 'recipients upload to /test/test2.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags2, root_etags3))
+
+    step(8, 'verify etag propagation')
+    root_etags4 = get_client_etags(clients)
+    error_check(compare_list(root_etags3, root_etags4, op.ne), 'owner unshares '
+                'etag for / previous [%s] new [%s]' % (root_etags3, root_etags4))
+
+    step(10, 'reshare file')
+    if groupnum == 1:
+        # first user of the group1 reshares /test to group
+        share_data = clients[0].share_file_with_group('/test', get_group_name(3), perms=31)
+
+    step(11, 'verify etag propagation')
+    root_etags5 = get_client_etags(clients)
+    error_check(compare_list(root_etags4, root_etags5, op.ne), 'recipient 2 reshares /test to recipient 4 '
+                'etag for / previous [%s] new [%s]' % (root_etags4, root_etags5))
+
+    step(12, 'recipient 2 uploads a file')
+    if groupnum == 1:
+        createfile(os.path.join(d, str(group_map[group][0]), 'test', 'test3.txt'), '3', count=1000, bs=10)
+        run_group_ocsync(d, group)
+
+    step(13, 'verify etag propagation')
+    if groupnum != 1:
+        run_group_ocsync(d, group)
+    root_etags6 = get_client_etags(clients)
+    error_check(compare_list(root_etags5, root_etags6, op.ne), 'recipient 2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags5, root_etags6))
+
+    step(15, 'verify etag propagation')
+    root_etags7 = get_client_etags(clients)
+    error_check(compare_list(root_etags6, root_etags7, op.ne), 'recipient 4 uploads /test/test4.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags6, root_etags7))
+
+    step(16, 'unshare file')
+    if groupnum == 1:
+        # remove the reshare created before
+        clients[0].delete_share(share_data.share_id)
+
+    step(17, 'verify etag propagation')
+    root_etags8 = get_client_etags(clients)
+    # recipients 2 and 3 aren't affected by the unshare
+    error_check(compare_list(root_etags7, root_etags8, op.eq), 'recipient 2 unshares the reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags7, root_etags8))
+
+@add_worker
+def recipient_3(step):
+    if finish_if_not_capable():
+        return
+
+    groupnum = parse_worker_number(reflection.getProcessName())
+    group = get_group_name(groupnum)
+
+    step (2, 'Create workdir')
+
+    d = make_workdir()
+    for usernum in group_map[group]:
+        mkdir(os.path.join(d, str(usernum)))
+    run_group_ocsync(d, group)
+
+    clients = []
+    for usernum in group_map[group]:
+        client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint)
+        client.login(get_account_name(usernum), config.oc_account_password)
+        clients.append(client)
+
+    root_etags = get_client_etags(clients)
+
+    step(11, 'verify etag propagation')
+    root_etags5 = get_client_etags(clients)
+    error_check(compare_list(root_etags, root_etags5, op.ne), 'recipient 2 reshares /test to recipient 4 '
+                'etag for / previous [%s] new [%s]' % (root_etags, root_etags5))
+
+    step(13, 'verify etag propagation')
+    root_etags6 = get_client_etags(clients)
+    error_check(compare_list(root_etags5, root_etags6, op.ne), 'recipient 2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags5, root_etags6))
+    run_group_ocsync(d, group)
+
+    step(14, 'upload file')
+    # just the first user of the group uploads the file
+    createfile(os.path.join(d, str(group_map[group][0]), 'test', 'test4.txt'), '4', count=1000, bs=10)
+    run_group_ocsync(d, group)
+
+    step(15, 'verify etag propagation')
+    root_etags7 = get_client_etags(clients)
+    error_check(compare_list(root_etags6, root_etags7, op.ne), 'recipient 4 uploads /test/test4.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags6, root_etags7))
+
+    step(17, 'verify etag propagation')
+    root_etags8 = get_client_etags(clients)
+    error_check(compare_list(root_etags7, root_etags8, op.ne), 'recipient 2 unshares the reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags7, root_etags8))
+
+for i in range(1,3):
+    add_worker(recipients, name='recipients_%s' % (i,))
+
diff --git a/lib/owncloud/test_sharePropagationInside.py b/lib/owncloud/test_sharePropagationInside.py
new file mode 100644
index 0000000..0983eb8
--- /dev/null
+++ b/lib/owncloud/test_sharePropagationInside.py
@@ -0,0 +1,407 @@
+__doc__ = """
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| step  | owner           | R1             | R2                | R3          | R4              |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 2     | create dir      | create dir     | 
create dir | create dir | create dir | +| | share /test | | | | | +| | -> R1 R2 | | | | | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 3 | | | reshare /test | | | +| | | | -> R3 | | | +| | | | reshare /test/sub | | | +| | | | -> R4 | | | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 4 | get etags | get etags | get etags | get etags | get etags | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 5 | upload to | | | | | +| | -> /test | | | | | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 6 | propagation | propagation | propagation | propagation | NOT propagation | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 7 | | | upload to | | | +| | | | -> /test | | | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 8 | propagation | propagation | propagation | propagation | NOT propagation | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 9 | upload to | | | | | +| | -> /test/sub | | | | | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 10 | propagation | propagation | propagation | propagation | propagation | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 11 | | upload to | | | | +| | | -> /test/sub | | | | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 12 | propagation | propagation | propagation | propagation | propagation | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 13 | | | | | upload to /sub | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 14 | propagation | propagation | propagation | propagation | propagation | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 15 | | | unshare | | | +| | | | -> /test/sub | | | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +| 16 | NOT propagation | NOT | NOT propagation | NOT | propagation | +| | | propagation | | propagation | | ++-------+-----------------+----------------+-------------------+-------------+-----------------+ +""" +from smashbox.utilities import * +import itertools +import os.path +import re + +@add_worker +def setup(step): + + step(1, 'create test users') + + num_users = 5 + + # Create additional accounts + if config.oc_number_test_users < num_users: + for i in range(config.oc_number_test_users + 1, num_users + 1): + username = "%s%i" % (config.oc_account_name, i) + delete_owncloud_account(username) + create_owncloud_account(username, config.oc_account_password) + login_owncloud_account(username, config.oc_account_password) + + check_users(num_users) + +@add_worker +def owner(step): + user = '%s%i' % (config.oc_account_name, 1) + + step (2, 'Create workdir') + d = make_workdir() + + mkdir(os.path.join(d, 'test', 'sub')) + run_ocsync(d, user_num=1) + + client = get_oc_api() + client.login(user, config.oc_account_password) + # make sure folder is shared + user2 = '%s%i' % (config.oc_account_name, 2) + share1_data = client.share_file_with_user('/test', user2, perms=31) + fatal_check(share1_data, 'failed 
sharing a file with %s' % (user2,)) + + user3 = '%s%i' % (config.oc_account_name, 3) + share1_data = client.share_file_with_user('/test', user3, perms=31) + fatal_check(share1_data, 'failed sharing a file with %s' % (user3,)) + + step(4, 'get base etags to compare') + root_etag = client.file_info('/').get_etag() + test_etag = client.file_info('/test').get_etag() + + step(5, 'Upload to /test') + createfile(os.path.join(d, 'test', 'test2.txt'), '2', count=1000, bs=10) + run_ocsync(d, user_num=1) + + step(6, 'verify etag propagation') + root_etag2 = client.file_info('/').get_etag() + error_check(root_etag != root_etag2, 'owner uploads to /test/test2.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag, root_etag2)) + + step(8, 'verify etag propagation') + root_etag3 = client.file_info('/').get_etag() + error_check(root_etag2 != root_etag3, 'recipient2 uploads to /test/test3.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag2, root_etag3)) + + step(9, 'Upload to /test/sub') + createfile(os.path.join(d, 'test', 'sub', 'test4.txt'), '4', count=1000, bs=10) + run_ocsync(d, user_num=1) + + step(10, 'verify etag propagation') + root_etag4 = client.file_info('/').get_etag() + test_etag2 = client.file_info('/test').get_etag() + error_check(root_etag3 != root_etag4, 'owner uploads to /test/sub/test4.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag3, root_etag4)) + error_check(test_etag != test_etag2, 'owner uploads to /test/sub/test4.txt ' + 'etag for /test previous [%s] new [%s]' % (test_etag, test_etag2)) + + step(12, 'verify etag propagation') + root_etag5 = client.file_info('/').get_etag() + test_etag3 = client.file_info('/test').get_etag() + error_check(root_etag4 != root_etag5, 'recipient 1 uploads to /test/sub/test5.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag4, root_etag5)) + error_check(test_etag2 != test_etag3, 'recipient 1 uploads to /test/sub/test5.txt ' + 'etag for /test previous [%s] new [%s]' % (test_etag2, test_etag3)) + + step(14, 'verify etag propagation') + root_etag6 = client.file_info('/').get_etag() + test_etag4 = client.file_info('/test').get_etag() + error_check(root_etag5 != root_etag6, 'recipient 4 uploads to /sub/test6.txt through reshare ' + 'etag for / previous [%s] new [%s]' % (root_etag5, root_etag6)) + error_check(test_etag3 != test_etag4, 'recipient 4 uploads to /sub/test6.txt through reshare ' + 'etag for /test previous [%s] new [%s]' % (test_etag3, test_etag4)) + + step(16, 'verify etag is NOT propagated') + root_etag7 = client.file_info('/').get_etag() + test_etag5 = client.file_info('/test').get_etag() + error_check(root_etag6 == root_etag7, 'recipient 2 unshares reshare ' + 'etag for / previous [%s] new [%s]' % (root_etag6, root_etag7)) + error_check(test_etag4 == test_etag5, 'recipient 2 unshares reshare ' + 'etag for /test previous [%s] new [%s]' % (test_etag4, test_etag5)) + +@add_worker +def recipient1(step): + + user = '%s%i' % (config.oc_account_name, 2) + + step (2, 'Create workdir') + + d = make_workdir() + run_ocsync(d, user_num=2) + + client = get_oc_api() + client.login(user, config.oc_account_password) + + step(4, 'get base etags to compare') + root_etag = client.file_info('/').get_etag() + test_etag = client.file_info('/test').get_etag() + + step(6, 'verify etag propagation') + root_etag2 = client.file_info('/').get_etag() + error_check(root_etag != root_etag2, 'owner uploads to /test/test2.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag, root_etag2)) + + step(8, 'verify etag propagation') + root_etag3 = 
client.file_info('/').get_etag() + error_check(root_etag2 != root_etag3, 'recipient2 uploads to /test/test3.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag2, root_etag3)) + + step(10, 'verify etag propagation') + root_etag4 = client.file_info('/').get_etag() + test_etag2 = client.file_info('/test').get_etag() + error_check(root_etag3 != root_etag4, 'owner uploads to /test/sub/test4.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag3, root_etag4)) + error_check(test_etag != test_etag2, 'owner uploads to /test/sub/test4.txt ' + 'etag for /test previous [%s] new [%s]' % (test_etag, test_etag2)) + + step(11, 'Upload to /test/sub') + run_ocsync(d, user_num=2) + createfile(os.path.join(d, 'test', 'sub', 'test5.txt'), '5', count=1000, bs=10) + run_ocsync(d, user_num=2) + + step(12, 'verify etag propagation') + root_etag5 = client.file_info('/').get_etag() + test_etag3 = client.file_info('/test').get_etag() + error_check(root_etag4 != root_etag5, 'recipient 1 uploads to /test/sub/test5.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag4, root_etag5)) + error_check(test_etag2 != test_etag3, 'recipient 1 uploads to /test/sub/test5.txt ' + 'etag for /test previous [%s] new [%s]' % (test_etag2, test_etag3)) + + step(14, 'verify etag propagation') + root_etag6 = client.file_info('/').get_etag() + test_etag4 = client.file_info('/test').get_etag() + error_check(root_etag5 != root_etag6, 'recipient 4 uploads to /sub/test6.txt through reshare ' + 'etag for / previous [%s] new [%s]' % (root_etag5, root_etag6)) + error_check(test_etag3 != test_etag4, 'recipient 4 uploads to /sub/test6.txt through reshare ' + 'etag for /test previous [%s] new [%s]' % (test_etag3, test_etag4)) + + step(16, 'verify etag propagation') + root_etag7 = client.file_info('/').get_etag() + test_etag5 = client.file_info('/test').get_etag() + # not affected by the unshare + error_check(root_etag6 == root_etag7, 'recipient 2 unshares reshare ' + 'etag for / previous [%s] new [%s]' % (root_etag6, root_etag7)) + error_check(test_etag4 == test_etag5, 'recipient 2 unshares reshare ' + 'etag for /test previous [%s] new [%s]' % (test_etag4, test_etag5)) + +@add_worker +def recipient2(step): + + user = '%s%i' % (config.oc_account_name, 3) + + step (2, 'Create workdir') + + d = make_workdir() + run_ocsync(d, user_num=3) + + client = get_oc_api() + client.login(user, config.oc_account_password) + root_etag = client.file_info('/').get_etag() + + user4 = '%s%i' % (config.oc_account_name, 4) + user5 = '%s%i' % (config.oc_account_name, 5) + + step(3, 'Reshare /test folder with %s and /test/sub with %s' % (user4, user5)) + + share1_data = client.share_file_with_user('/test', user4, perms=31) + fatal_check(share1_data, 'failed sharing a file with %s' % (user4,)) + share2_data = client.share_file_with_user('/test/sub', user5, perms=31) + fatal_check(share2_data, 'failed sharing a file with %s' % (user5,)) + + step(4, 'get base etags to compare') + root_etag = client.file_info('/').get_etag() + test_etag = client.file_info('/test').get_etag() + + step(6, 'verify etag propagation') + root_etag2 = client.file_info('/').get_etag() + error_check(root_etag != root_etag2, 'owner uploads to /test/test2.txt ' + 'etag for / previous [%s] new [%s]' % (root_etag, root_etag2)) + + step(7, 'Upload to /test') + run_ocsync(d, user_num=3) + createfile(os.path.join(d, 'test', 'test3.txt'), '3', count=1000, bs=10) + run_ocsync(d, user_num=3) + + step(8, 'verify etag propagation') + root_etag3 = client.file_info('/').get_etag() + error_check(root_etag2 
!= root_etag3, 'recipient2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag2, root_etag3))
+
+    step(10, 'verify etag propagation')
+    root_etag4 = client.file_info('/').get_etag()
+    test_etag2 = client.file_info('/test').get_etag()
+    error_check(root_etag3 != root_etag4, 'owner uploads to /test/sub/test4.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag3, root_etag4))
+    error_check(test_etag != test_etag2, 'owner uploads to /test/sub/test4.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etag, test_etag2))
+
+    step(12, 'verify etag propagation')
+    root_etag5 = client.file_info('/').get_etag()
+    test_etag3 = client.file_info('/test').get_etag()
+    error_check(root_etag4 != root_etag5, 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag4, root_etag5))
+    error_check(test_etag2 != test_etag3, 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etag2, test_etag3))
+
+    step(14, 'verify etag propagation')
+    root_etag6 = client.file_info('/').get_etag()
+    test_etag4 = client.file_info('/test').get_etag()
+    error_check(root_etag5 != root_etag6, 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag5, root_etag6))
+    error_check(test_etag3 != test_etag4, 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etag3, test_etag4))
+
+    step(15, 'Unshare reshared /test/sub')
+    client.delete_share(share2_data.share_id)
+
+    step(16, 'verify etag propagation')
+    root_etag7 = client.file_info('/').get_etag()
+    test_etag5 = client.file_info('/test').get_etag()
+    error_check(root_etag6 == root_etag7, 'recipient 2 unshares reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag6, root_etag7))
+    error_check(test_etag4 == test_etag5, 'recipient 2 unshares reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etag4, test_etag5))
+
+@add_worker
+def recipient3(step):
+
+    user = '%s%i' % (config.oc_account_name, 4)
+
+    step (2, 'Create workdir')
+
+    d = make_workdir()
+    run_ocsync(d, user_num=4)
+
+    client = get_oc_api()
+    client.login(user, config.oc_account_password)
+
+    step(4, 'get base etags to compare')
+    root_etag = client.file_info('/').get_etag()
+    test_etag = client.file_info('/test').get_etag()
+
+    step(6, 'verify etag propagation')
+    root_etag2 = client.file_info('/').get_etag()
+    error_check(root_etag != root_etag2, 'owner uploads to /test/test2.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag, root_etag2))
+
+    step(8, 'verify etag propagation')
+    root_etag3 = client.file_info('/').get_etag()
+    error_check(root_etag2 != root_etag3, 'recipient2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag2, root_etag3))
+
+    step(10, 'verify etag propagation')
+    root_etag4 = client.file_info('/').get_etag()
+    test_etag2 = client.file_info('/test').get_etag()
+    error_check(root_etag3 != root_etag4, 'owner uploads to /test/sub/test4.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag3, root_etag4))
+    error_check(test_etag != test_etag2, 'owner uploads to /test/sub/test4.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etag, test_etag2))
+
+    step(12, 'verify etag propagation')
+    root_etag5 = client.file_info('/').get_etag()
+    test_etag3 = client.file_info('/test').get_etag()
+    error_check(root_etag4 != root_etag5, 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag4, root_etag5))
+    error_check(test_etag2 != test_etag3, 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etag2, test_etag3))
+
+    step(14, 'verify etag propagation')
+    root_etag6 = client.file_info('/').get_etag()
+    test_etag4 = client.file_info('/test').get_etag()
+    error_check(root_etag5 != root_etag6, 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag5, root_etag6))
+    error_check(test_etag3 != test_etag4, 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etag3, test_etag4))
+
+    step(16, 'verify etag propagation')
+    root_etag7 = client.file_info('/').get_etag()
+    test_etag5 = client.file_info('/test').get_etag()
+    error_check(root_etag6 == root_etag7, 'recipient 2 unshares reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag6, root_etag7))
+    error_check(test_etag4 == test_etag5, 'recipient 2 unshares reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etag4, test_etag5))
+
+@add_worker
+def recipient4(step):
+
+    user = '%s%i' % (config.oc_account_name, 5)
+
+    step (2, 'Create workdir')
+
+    d = make_workdir()
+    run_ocsync(d, user_num=5)
+
+    client = get_oc_api()
+    client.login(user, config.oc_account_password)
+
+    step(4, 'get base etags to compare')
+    root_etag = client.file_info('/').get_etag()
+    sub_etag = client.file_info('/sub').get_etag()
+
+    step(6, 'verify etag is NOT propagated')
+    root_etag2 = client.file_info('/').get_etag()
+    error_check(root_etag == root_etag2, 'owner uploads to /test/test2.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag, root_etag2))
+
+    step(8, 'verify etag is NOT propagated')
+    root_etag3 = client.file_info('/').get_etag()
+    error_check(root_etag2 == root_etag3, 'recipient2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag2, root_etag3))
+
+    step(10, 'verify etag propagation')
+    root_etag4 = client.file_info('/').get_etag()
+    sub_etag2 = client.file_info('/sub').get_etag()
+    error_check(root_etag3 != root_etag4, 'owner uploads to /test/sub/test4.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag3, root_etag4))
+    error_check(sub_etag != sub_etag2, 'owner uploads to /test/sub/test4.txt '
+                'etag for /sub previous [%s] new [%s]' % (sub_etag, sub_etag2))
+
+    step(12, 'verify etag propagation')
+    root_etag5 = client.file_info('/').get_etag()
+    sub_etag3 = client.file_info('/sub').get_etag()
+    error_check(root_etag4 != root_etag5, 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag4, root_etag5))
+    error_check(sub_etag2 != sub_etag3, 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for /sub previous [%s] new [%s]' % (sub_etag2, sub_etag3))
+
+    step(13, 'Upload to /sub')
+    run_ocsync(d, user_num=5)
+    createfile(os.path.join(d, 'sub', 'test6.txt'), '6', count=1000, bs=10)
+    run_ocsync(d, user_num=5)
+
+    step(14, 'verify etag propagation')
+    root_etag6 = client.file_info('/').get_etag()
+    sub_etag4 = client.file_info('/sub').get_etag()
+    error_check(root_etag5 != root_etag6, 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag5, root_etag6))
+    error_check(sub_etag3 != sub_etag4, 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for /sub previous [%s] new [%s]' % (sub_etag3, sub_etag4))
+
+    step(16, 'verify etag propagation')
+    root_etag7 = client.file_info('/').get_etag()
+    error_check(root_etag6 != root_etag7, 'recipient 2 unshares reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag6, root_etag7))
+    # /sub folder should be deleted at this point, so no checking
+
diff --git a/lib/owncloud/test_sharePropagationInsideGroups.py b/lib/owncloud/test_sharePropagationInsideGroups.py
new file mode 100644
index 0000000..193cf50
--- /dev/null
+++ b/lib/owncloud/test_sharePropagationInsideGroups.py
@@ -0,0 +1,508 @@
+__doc__ = """
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| step  | owner           | R1             | R2                | R3          | R4              |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 2     | create dir      | create dir     | create dir        | create dir  | create dir      |
+|       | share /test     |                |                   |             |                 |
+|       | -> R1 R2        |                |                   |             |                 |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 3     |                 |                | reshare /test     |             |                 |
+|       |                 |                | -> R3             |             |                 |
+|       |                 |                | reshare /test/sub |             |                 |
+|       |                 |                | -> R4             |             |                 |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 4     | get etags       | get etags      | get etags         | get etags   | get etags       |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 5     | upload to       |                |                   |             |                 |
+|       | -> /test        |                |                   |             |                 |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 6     | propagation     | propagation    | propagation       | propagation | NOT propagation |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 7     |                 |                | upload to         |             |                 |
+|       |                 |                | -> /test          |             |                 |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 8     | propagation     | propagation    | propagation       | propagation | NOT propagation |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 9     | upload to       |                |                   |             |                 |
+|       | -> /test/sub    |                |                   |             |                 |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 10    | propagation     | propagation    | propagation       | propagation | propagation     |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 11    |                 | upload to      |                   |             |                 |
+|       |                 | -> /test/sub   |                   |             |                 |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 12    | propagation     | propagation    | propagation       | propagation | propagation     |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 13    |                 |                |                   |             | upload to /sub  |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 14    | propagation     | propagation    | propagation       | propagation | propagation     |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 15    |                 |                | unshare           |             |                 |
+|       |                 |                | -> /test/sub      |             |                 |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+| 16    | NOT propagation | NOT            | NOT propagation   | NOT         | propagation     |
+|       |                 | propagation    |                   | propagation |                 |
++-------+-----------------+----------------+-------------------+-------------+-----------------+
+"""
+from smashbox.utilities import *
+import itertools
+import os.path
+import re
+import operator as op
+
+# True => use new webdav endpoint (dav/files)
+# False => use old webdav endpoint (webdav)
+use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True))
+
+testsets = [
+    {
+        'use_new_dav_endpoint':False
+    },
+    {
+        'use_new_dav_endpoint':True
+    }
+]
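A side note on the `use_new_dav_endpoint` option above: `bool()` of any non-empty string is `True`, so this pattern only behaves as expected when the configured value is already a boolean (as it is in the `testsets` entries). A minimal sketch of a more defensive coercion, using a hypothetical `parse_bool` helper that is not part of this patch:

```
# Hypothetical helper (not part of the patch): coerce a config value that may
# arrive as a bool or as a string into a real bool.
def parse_bool(value, default=True):
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        return value.strip().lower() in ('1', 'true', 'yes')
    return default

assert parse_bool(True) is True
assert parse_bool('False') is False   # note: bool('False') would be True
```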
+
+def get_group_name(i):
+    return '%s%i' % (config.oc_group_name, i)
+
+def get_account_name(i):
+    return '%s%i' % (config.oc_account_name, i)
+
+group_map = {
+    # maps the group name with the usernums belonging to the group
+    get_group_name(1) : [2,3],
+    get_group_name(2) : [4,5],
+    get_group_name(3) : [6,7],
+    get_group_name(4) : [8,9],
+}
+
+def compare_list(list1, list2, func):
+    """
+    Compare the lists item by item using the function func. If func returns False
+    for any pair, compare_list returns False; otherwise it returns True.
+    """
+    if len(list1) != len(list2):
+        return False
+
+    for index in range(0, len(list1)):
+        if not func(list1[index], list2[index]):
+            return False
+    return True
+
+def get_client_etags(clients, path):
+    new_etags = []
+    for client in clients:
+        new_etags.append(client.file_info(path).get_etag())
+
+    return new_etags
+
+def run_group_ocsync(d, group_name):
+    for usernum in group_map[group_name]:
+        run_ocsync(os.path.join(d, str(usernum)), user_num=usernum, use_new_dav_endpoint=use_new_dav_endpoint)
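The two helpers above combine with the `operator` module in the checks further down: `op.ne` asserts that every client in the group observed an etag change, `op.eq` that none did. A small sketch with invented etag values, assuming `compare_list` is in scope:

```
import operator as op

old_etags = ['"etag-a1"', '"etag-b1"']
new_etags = ['"etag-a2"', '"etag-b2"']

assert compare_list(old_etags, new_etags, op.ne)        # propagation happened for all clients
assert compare_list(old_etags, list(old_etags), op.eq)  # nothing changed for any client
```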
+
+def finish_if_not_capable():
+    # Finish the test if some of the prerequisites for this test are not satisfied
+    if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True:
+        # Don't test the new endpoint against servers older than 10.0, since it is not supported there
+        logger.warn("Skipping test since webdav endpoint is not capable for this server version")
+        return True
+    return False
+
+@add_worker
+def setup(step):
+    if finish_if_not_capable():
+        return
+
+    step(1, 'create test users')
+    num_users = 9
+
+    # Create additional accounts
+    if config.oc_number_test_users < num_users:
+        for i in range(config.oc_number_test_users + 1, num_users + 1):
+            username = "%s%i" % (config.oc_account_name, i)
+            delete_owncloud_account(username)
+            create_owncloud_account(username, config.oc_account_password)
+            login_owncloud_account(username, config.oc_account_password)
+
+    check_users(num_users)
+    reset_owncloud_group(num_groups=4)
+
+    for group in group_map:
+        for user in group_map[group]:
+            add_user_to_group(get_account_name(user), group)
+
+@add_worker
+def owner(step):
+    if finish_if_not_capable():
+        return
+
+    user = '%s%i' % (config.oc_account_name, 1)
+
+    step (2, 'Create workdir')
+    d = make_workdir()
+
+    mkdir(os.path.join(d, 'test', 'sub'))
+    run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint)
+    client.login(user, config.oc_account_password)
+    # make sure folder is shared
+    group1 = get_group_name(1)
+    share1_data = client.share_file_with_group('/test', group1, perms=31)
+    fatal_check(share1_data, 'failed sharing a file with %s' % (group1,))
+
+    group2 = get_group_name(2)
+    share2_data = client.share_file_with_group('/test', group2, perms=31)
+    fatal_check(share2_data, 'failed sharing a file with %s' % (group2,))
+
+    step(4, 'get base etags to compare')
+    root_etag = client.file_info('/').get_etag()
+    test_etag = client.file_info('/test').get_etag()
+
+    step(5, 'Upload to /test')
+    createfile(os.path.join(d, 'test', 'test2.txt'), '2', count=1000, bs=10)
+    run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    step(6, 'verify etag propagation')
+    root_etag2 = client.file_info('/').get_etag()
+    error_check(root_etag != root_etag2, 'owner uploads to /test/test2.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag, root_etag2))
+
+    step(8, 'verify etag propagation')
+    root_etag3 = client.file_info('/').get_etag()
+    error_check(root_etag2 != root_etag3, 'recipient2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag2, root_etag3))
+
+    step(9, 'Upload to /test/sub')
+    createfile(os.path.join(d, 'test', 'sub', 'test4.txt'), '4', count=1000, bs=10)
+    run_ocsync(d, user_num=1, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    step(10, 'verify etag propagation')
+    root_etag4 = client.file_info('/').get_etag()
+    test_etag2 = client.file_info('/test').get_etag()
+    error_check(root_etag3 != root_etag4, 'owner uploads to /test/sub/test4.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag3, root_etag4))
+    error_check(test_etag != test_etag2, 'owner uploads to /test/sub/test4.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etag, test_etag2))
+
+    step(12, 'verify etag propagation')
+    root_etag5 = client.file_info('/').get_etag()
+    test_etag3 = client.file_info('/test').get_etag()
+    error_check(root_etag4 != root_etag5, 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for / previous [%s] new [%s]' % (root_etag4, root_etag5))
+    error_check(test_etag2 != test_etag3, 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etag2, test_etag3))
+
+    step(14, 'verify etag propagation')
+    root_etag6 = client.file_info('/').get_etag()
+    test_etag4 = client.file_info('/test').get_etag()
+    error_check(root_etag5 != root_etag6, 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag5, root_etag6))
+    error_check(test_etag3 != test_etag4, 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etag3, test_etag4))
+
+    step(16, 'verify etag is NOT propagated')
+    root_etag7 = client.file_info('/').get_etag()
+    test_etag5 = client.file_info('/test').get_etag()
+    error_check(root_etag6 == root_etag7, 'recipient 2 unshares reshare '
+                'etag for / previous [%s] new [%s]' % (root_etag6, root_etag7))
+    error_check(test_etag4 == test_etag5, 'recipient 2 unshares reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etag4, test_etag5))
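The `perms=31` passed to the `share_file_with_group()` calls above is the OCS sharing API permission bitmask with every bit set:

```
# OCS share permission bits; 31 is the combination of all of them
OCS_PERMISSION_READ   = 1
OCS_PERMISSION_UPDATE = 2
OCS_PERMISSION_CREATE = 4
OCS_PERMISSION_DELETE = 8
OCS_PERMISSION_SHARE  = 16

assert (OCS_PERMISSION_READ | OCS_PERMISSION_UPDATE | OCS_PERMISSION_CREATE |
        OCS_PERMISSION_DELETE | OCS_PERMISSION_SHARE) == 31
```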
+
+@add_worker
+def recipient1(step):
+    if finish_if_not_capable():
+        return
+
+    group = get_group_name(1)
+
+    step (2, 'Create workdir')
+
+    d = make_workdir()
+    for usernum in group_map[group]:
+        mkdir(os.path.join(d, str(usernum)))
+
+    run_group_ocsync(d, group)
+
+    clients = []
+    for usernum in group_map[group]:
+        client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint)
+        client.login(get_account_name(usernum), config.oc_account_password)
+        clients.append(client)
+
+    step(4, 'get base etags to compare')
+    root_etags = get_client_etags(clients, '/')
+    test_etags = get_client_etags(clients, '/test')
+
+    step(6, 'verify etag propagation')
+    root_etags2 = get_client_etags(clients, '/')
+    error_check(compare_list(root_etags, root_etags2, op.ne), 'owner uploads to /test/test2.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags, root_etags2))
+
+    step(8, 'verify etag propagation')
+    root_etags3 = get_client_etags(clients, '/')
+    error_check(compare_list(root_etags2, root_etags3, op.ne), 'recipient2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags2, root_etags3))
+
+    step(10, 'verify etag propagation')
+    root_etags4 = get_client_etags(clients, '/')
+    test_etags2 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags3, root_etags4, op.ne), 'owner uploads to /test/sub/test4.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags3, root_etags4))
+    error_check(compare_list(test_etags, test_etags2, op.ne), 'owner uploads to /test/sub/test4.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etags, test_etags2))
+
+    step(11, 'Upload to /test/sub')
+    run_group_ocsync(d, group)
+    createfile(os.path.join(d, str(group_map[group][0]), 'test', 'sub', 'test5.txt'), '5', count=1000, bs=10)
+    run_group_ocsync(d, group)
+
+    step(12, 'verify etag propagation')
+    root_etags5 = get_client_etags(clients, '/')
+    test_etags3 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags4, root_etags5, op.ne), 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags4, root_etags5))
+    error_check(compare_list(test_etags2, test_etags3, op.ne), 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etags2, test_etags3))
+
+    step(14, 'verify etag propagation')
+    root_etags6 = get_client_etags(clients, '/')
+    test_etags4 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags5, root_etags6, op.ne), 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags5, root_etags6))
+    error_check(compare_list(test_etags3, test_etags4, op.ne), 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etags3, test_etags4))
+
+    step(16, 'verify etag propagation')
+    root_etags7 = get_client_etags(clients, '/')
+    test_etags5 = get_client_etags(clients, '/test')
+    # not affected by the unshare
+    error_check(compare_list(root_etags6, root_etags7, op.eq), 'recipient 2 unshares reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags6, root_etags7))
+    error_check(compare_list(test_etags4, test_etags5, op.eq), 'recipient 2 unshares reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etags4, test_etags5))
+
+@add_worker
+def recipient2(step):
+    if finish_if_not_capable():
+        return
+
+    group = get_group_name(2)
+
+    step (2, 'Create workdir')
+
+    d = make_workdir()
+    for usernum in group_map[group]:
+        mkdir(os.path.join(d, str(usernum)))
+
+    run_group_ocsync(d, group)
+
+    clients = []
+    for usernum in group_map[group]:
+        client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint)
+        client.login(get_account_name(usernum), config.oc_account_password)
+        clients.append(client)
+
+    group3 = get_group_name(3)
+    group4 = get_group_name(4)
+
+    step(3, 'Reshare /test folder with %s and /test/sub with %s' % (group3, group4))
+
+    # only the first user of the group shares with another group, to keep it simple
+    share1_data = clients[0].share_file_with_group('/test', group3, perms=31)
+    fatal_check(share1_data, 'failed sharing a file with %s' % (group3,))
+    share2_data = clients[0].share_file_with_group('/test/sub', group4, perms=31)
+    fatal_check(share2_data, 'failed sharing a file with %s' % (group4,))
+
+    step(4, 'get base etags to compare')
+    root_etags = get_client_etags(clients, '/')
+    test_etags = get_client_etags(clients, '/test')
+
+    step(6, 'verify etag propagation')
+    root_etags2 = get_client_etags(clients, '/')
+    error_check(compare_list(root_etags, root_etags2, op.ne), 'owner uploads to /test/test2.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags, root_etags2))
+
+    step(7, 'Upload to /test')
+    run_group_ocsync(d, group)
+    createfile(os.path.join(d, str(group_map[group][0]), 'test', 'test3.txt'), '3', count=1000, bs=10)
+    run_group_ocsync(d, group)
+
+    step(8, 'verify etag propagation')
+    root_etags3 = get_client_etags(clients, '/')
+    error_check(compare_list(root_etags2, root_etags3, op.ne), 'recipient2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags2, root_etags3))
+
+    step(10, 'verify etag propagation')
+    root_etags4 = get_client_etags(clients, '/')
+    test_etags2 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags3, root_etags4, op.ne), 'owner uploads to /test/sub/test4.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags3, root_etags4))
+    error_check(compare_list(test_etags, test_etags2, op.ne), 'owner uploads to /test/sub/test4.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etags, test_etags2))
+
+    step(12, 'verify etag propagation')
+    root_etags5 = get_client_etags(clients, '/')
+    test_etags3 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags4, root_etags5, op.ne), 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags4, root_etags5))
+    error_check(compare_list(test_etags2, test_etags3, op.ne), 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etags2, test_etags3))
+
+    step(14, 'verify etag propagation')
+    root_etags6 = get_client_etags(clients, '/')
+    test_etags4 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags5, root_etags6, op.ne), 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags5, root_etags6))
+    error_check(compare_list(test_etags3, test_etags4, op.ne), 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etags3, test_etags4))
+
+    step(15, 'Unshare reshared /test/sub')
+    clients[0].delete_share(share2_data.share_id)
+
+    step(16, 'verify etag propagation')
+    root_etags7 = get_client_etags(clients, '/')
+    test_etags5 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags6, root_etags7, op.eq), 'recipient 2 unshares reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags6, root_etags7))
+    error_check(compare_list(test_etags4, test_etags5, op.eq), 'recipient 2 unshares reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etags4, test_etags5))
+
+@add_worker
+def recipient3(step):
+    if finish_if_not_capable():
+        return
+
+    group = get_group_name(3)
+
+    step (2, 'Create workdir')
+
+    d = make_workdir()
+    for usernum in group_map[group]:
+        mkdir(os.path.join(d, str(usernum)))
+
+    run_group_ocsync(d, group)
+
+    clients = []
+    for usernum in group_map[group]:
+        client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint)
+        client.login(get_account_name(usernum), config.oc_account_password)
+        clients.append(client)
+
+    step(4, 'get base etags to compare')
+    root_etags = get_client_etags(clients, '/')
+    test_etags = get_client_etags(clients, '/test')
+
+    step(6, 'verify etag propagation')
+    root_etags2 = get_client_etags(clients, '/')
+    error_check(compare_list(root_etags, root_etags2, op.ne), 'owner uploads to /test/test2.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags, root_etags2))
+
+    step(8, 'verify etag propagation')
+    root_etags3 = get_client_etags(clients, '/')
+    error_check(compare_list(root_etags2, root_etags3, op.ne), 'recipient2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags2, root_etags3))
+
+    step(10, 'verify etag propagation')
+    root_etags4 = get_client_etags(clients, '/')
+    test_etags2 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags3, root_etags4, op.ne), 'owner uploads to /test/sub/test4.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags3, root_etags4))
+    error_check(compare_list(test_etags, test_etags2, op.ne), 'owner uploads to /test/sub/test4.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etags, test_etags2))
+
+    step(12, 'verify etag propagation')
+    root_etags5 = get_client_etags(clients, '/')
+    test_etags3 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags4, root_etags5, op.ne), 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags4, root_etags5))
+    error_check(compare_list(test_etags2, test_etags3, op.ne), 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for /test previous [%s] new [%s]' % (test_etags2, test_etags3))
+
+    step(14, 'verify etag propagation')
+    root_etags6 = get_client_etags(clients, '/')
+    test_etags4 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags5, root_etags6, op.ne), 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags5, root_etags6))
+    error_check(compare_list(test_etags3, test_etags4, op.ne), 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etags3, test_etags4))
+
+    step(16, 'verify etag propagation')
+    root_etags7 = get_client_etags(clients, '/')
+    test_etags5 = get_client_etags(clients, '/test')
+    error_check(compare_list(root_etags6, root_etags7, op.eq), 'recipient 2 unshares reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags6, root_etags7))
+    error_check(compare_list(test_etags4, test_etags5, op.eq), 'recipient 2 unshares reshare '
+                'etag for /test previous [%s] new [%s]' % (test_etags4, test_etags5))
+
+@add_worker
+def recipient4(step):
+    if finish_if_not_capable():
+        return
+
+    group = get_group_name(4)
+
+    step (2, 'Create workdir')
+
+    d = make_workdir()
+    for usernum in group_map[group]:
+        mkdir(os.path.join(d, str(usernum)))
+
+    run_group_ocsync(d, group)
+
+    clients = []
+    for usernum in group_map[group]:
+        client = get_oc_api(use_new_dav_endpoint=use_new_dav_endpoint)
+        client.login(get_account_name(usernum), config.oc_account_password)
+        clients.append(client)
+
+    step(4, 'get base etags to compare')
+    root_etags = get_client_etags(clients, '/')
+    sub_etags = get_client_etags(clients, '/sub')
+
+    step(6, 'verify etag is NOT propagated')
+    root_etags2 = get_client_etags(clients, '/')
+    error_check(compare_list(root_etags, root_etags2, op.eq), 'owner uploads to /test/test2.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags, root_etags2))
+
+    step(8, 'verify etag is NOT propagated')
+    root_etags3 = get_client_etags(clients, '/')
+    error_check(compare_list(root_etags2, root_etags3, op.eq), 'recipient2 uploads to /test/test3.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags2, root_etags3))
+
+    step(10, 'verify etag propagation')
+    root_etags4 = get_client_etags(clients, '/')
+    sub_etags2 = get_client_etags(clients, '/sub')
+    error_check(compare_list(root_etags3, root_etags4, op.ne), 'owner uploads to /test/sub/test4.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags3, root_etags4))
+    error_check(compare_list(sub_etags, sub_etags2, op.ne), 'owner uploads to /test/sub/test4.txt '
+                'etag for /sub previous [%s] new [%s]' % (sub_etags, sub_etags2))
+
+    step(12, 'verify etag propagation')
+    root_etags5 = get_client_etags(clients, '/')
+    sub_etags3 = get_client_etags(clients, '/sub')
+    error_check(compare_list(root_etags4, root_etags5, op.ne), 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for / previous [%s] new [%s]' % (root_etags4, root_etags5))
+    error_check(compare_list(sub_etags2, sub_etags3, op.ne), 'recipient 1 uploads to /test/sub/test5.txt '
+                'etag for /sub previous [%s] new [%s]' % (sub_etags2, sub_etags3))
+
+    step(13, 'Upload to /sub')
+    run_group_ocsync(d, group)
+    createfile(os.path.join(d, str(group_map[group][0]), 'sub', 'test6.txt'), '6', count=1000, bs=10)
+    run_group_ocsync(d, group)
+
+    step(14, 'verify etag propagation')
+    root_etags6 = get_client_etags(clients, '/')
+    sub_etags4 = get_client_etags(clients, '/sub')
+    error_check(compare_list(root_etags5, root_etags6, op.ne), 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags5, root_etags6))
+    error_check(compare_list(sub_etags3, sub_etags4, op.ne), 'recipient 4 uploads to /sub/test6.txt through reshare '
+                'etag for /sub previous [%s] new [%s]' % (sub_etags3, sub_etags4))
+
+    step(16, 'verify etag propagation')
+    root_etags7 = get_client_etags(clients, '/')
+    error_check(compare_list(root_etags6, root_etags7, op.ne), 'recipient 2 unshares reshare '
+                'etag for / previous [%s] new [%s]' % (root_etags6, root_etags7))
+    # /sub folder should be deleted at this point, so no checking
+
diff --git a/lib/test_basicSync.py b/lib/test_basicSync.py
index 615d40a..6592cfb 100644
--- a/lib/test_basicSync.py
+++ b/lib/test_basicSync.py
@@ -20,7 +20,7 @@
 from smashbox.utilities import *
 
 import glob
-
+import time
 
 filesizeKB = int(config.get('basicSync_filesizeKB',10000))
 
@@ -30,6 +30,9 @@
 # subdirectory where to put files (if empty then use top level workdir)
 subdirPath = config.get('basicSync_subdirPath',"")
 
+# True => use new webdav endpoint (dav/files)
+# False => use old webdav endpoint (webdav)
+use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True))
 
 #### testsets = [
 #### { 'basicSync_filesizeKB': 1,
@@ -88,7 +91,7 @@ def expect_deleted_files(d,expected_deleted_files):
 
 def expect_conflict_files(d,expected_conflict_files):
 
-    actual_conflict_files = glob.glob(os.path.join(d,'*conflict*.dat'))
+    actual_conflict_files = get_conflict_files(d)
 
     logger.debug('conflict files in %s: %s',d,actual_conflict_files)
 
@@ -105,6 +108,85 @@ def expect_conflict_files(d,expected_conflict_files):
 def expect_no_conflict_files(d):
     expect_conflict_files(d,[])
 
+def final_check(d,shared):
+    """ This is the final check applicable to all workers - this reflects the status of the remote repository so everyone should be in sync.
+        The only potential differences are with locally generated conflict files.
+    """
+
+    list_files(d)
+    expect_content(os.path.join(d,'TEST_FILE_MODIFIED_NONE.dat'), shared['md5_creator'])
+
+    expect_content(os.path.join(d,'TEST_FILE_ADDED_LOSER.dat'), shared['md5_loser'])
+
+    if not rmLocalStateDB:
+        expect_content(os.path.join(d,'TEST_FILE_MODIFIED_LOSER.dat'), shared['md5_loser'])
+    else:
+        expect_content(os.path.join(d,'TEST_FILE_MODIFIED_LOSER.dat'), shared['md5_creator']) # in this case, a conflict is created on the loser and file on the server stays the same
+
+    expect_content(os.path.join(d,'TEST_FILE_ADDED_WINNER.dat'), shared['md5_winner'])
+    expect_content(os.path.join(d,'TEST_FILE_MODIFIED_WINNER.dat'), shared['md5_winner'])
+    expect_content(os.path.join(d,'TEST_FILE_ADDED_BOTH.dat'), shared['md5_winner']) # a conflict on the loser, server not changed
+    expect_content(os.path.join(d,'TEST_FILE_MODIFIED_BOTH.dat'), shared['md5_winner']) # a conflict on the loser, server not changed
+
+    if not rmLocalStateDB:
+        expect_no_deleted_files(d) # normally any deleted files should not come back
+    else:
+        expect_deleted_files(d, ['TEST_FILE_DELETED_LOSER.dat', 'TEST_FILE_DELETED_WINNER.dat']) # but not TEST_FILE_DELETED_BOTH.dat !
+        expect_content(os.path.join(d,'TEST_FILE_DELETED_LOSER.dat'), shared['md5_creator']) # this file should be downloaded by the loser because it has no other choice (no previous state to compare with)
+        expect_content(os.path.join(d,'TEST_FILE_DELETED_WINNER.dat'), shared['md5_creator']) # this file should be re-uploaded by the loser because it has no other choice (no previous state to compare with)
+
+###############################################################################
+
+def final_check_1_5(d): # this logic applies for 1.5.x client and owncloud server...
+    """ Final verification: all local sync folders should look the same. We expect conflicts and handling of deleted files depending on the rmLocalStateDB option. See code for details.
+    """
+    import glob
+
+    list_files(d)
+
+    conflict_files = glob.glob(os.path.join(d,'*_conflict-*-*'))
+
+    logger.debug('conflict files in %s: %s',d,conflict_files)
+
+    if not rmLocalStateDB:
+        # we expect exactly 1 conflict file
+
+        logger.warning("FIXME: currently winner gets a conflict file - exclude list should be updated and this assert modified for the winner")
+
+        error_check(len(conflict_files) == 1, "there should be exactly 1 conflict file (%d)"%len(conflict_files))
+    else:
+        # we expect exactly 3 conflict files
+        error_check(len(conflict_files) == 3, "there should be exactly 3 conflict files (%d)"%len(conflict_files))
+
+    for fn in conflict_files:
+
+        if not rmLocalStateDB:
+            error_check('_BOTH' in fn, """only files modified in BOTH workers have a conflict - all other files should be conflict-free""")
+
+        else:
+            error_check('_BOTH' in fn or '_LOSER' in fn or '_WINNER' in fn, """files which are modified by ANY worker have a conflict now; files which are not modified should not have a conflict""")
+
+    deleted_files = glob.glob(os.path.join(d,'*_DELETED*'))
+
+    logger.debug('deleted files in %s: %s',d,deleted_files)
+
+    if not rmLocalStateDB:
+        error_check(len(deleted_files) == 0, 'deleted files should not be there normally')
+    else:
+        # deleted files "reappear" if local sync db is lost on the loser, the only file that does not reappear is the DELETED_BOTH which was deleted on *all* local clients
+
+        error_check(len(deleted_files) == 2, "we expect exactly 2 deleted files")
+
+        for fn in deleted_files:
+            error_check('_LOSER' in fn or '_WINNER' in fn, "deleted files should only reappear if delete on only one client (but not on both at the same time) ")
+
+def finish_if_not_capable():
+    # Finish the test if some of the prerequisites for this test are not satisfied
+    if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True:
+        # Don't test the new endpoint against servers older than 10.0, since it is not supported there
+        logger.warn("Skipping test since webdav endpoint is not capable for this server version")
+        return True
+    return False
 
 @add_worker
 def creator(step):
@@ -140,11 +222,16 @@ def creator(step):
     logger.info('md5_creator: %s',shared['md5_creator'])
     list_files(subdir)
 
-    run_ocsync(d)
+    run_ocsync(subdir, use_new_dav_endpoint=use_new_dav_endpoint)
 
     list_files(subdir)
 
+    time.sleep(1)
+
     step(7,'download the repository')
-    run_ocsync(d,n=3)
+
+    run_ocsync(d,n=3, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    time.sleep(1)
 
     step(8,'final check')
 
@@ -153,12 +240,15 @@
 
 @add_worker
 def winner(step):
+    if finish_if_not_capable():
+        return
+
     step(2,'initial sync')
 
     d = make_workdir()
     subdir = os.path.join(d,subdirPath)
 
-    run_ocsync(d)
+    run_ocsync(subdir, use_new_dav_endpoint=use_new_dav_endpoint)
 
     step(3,'modify locally and sync to server')
 
@@ -179,7 +269,7 @@ def winner(step):
     shared['md5_winner'] = md5sum(os.path.join(subdir,'TEST_FILE_ADDED_WINNER.dat'))
     logger.info('md5_winner: %s',shared['md5_winner'])
 
-    run_ocsync(d)
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
 
     sleep(1.1) # csync: mtime diff < 1s => conflict not detected, see: #5589 https://github.com/owncloud/client/issues/5589
run_ocsync(d,n=3) # conflict file will be synced to the server but it requires more than one sync run + run_ocsync(d,n=3, use_new_dav_endpoint=use_new_dav_endpoint) # conflict file will be synced to the server but it requires more than one sync run + + time.sleep(1) step(6,'final sync') run_ocsync(d) @@ -260,88 +354,21 @@ def loser(step): @add_worker def checker(step): + if finish_if_not_capable(): + return + shared = reflection.getSharedObject() step(7,'download the repository for final verification') d = make_workdir() subdir = os.path.join(d,subdirPath) - run_ocsync(d,n=3) + run_ocsync(d,n=3, use_new_dav_endpoint=use_new_dav_endpoint) + + time.sleep(1) step(8,'final check') final_check(subdir,shared) expect_no_conflict_files(subdir) - -def final_check(d,shared): - """ This is the final check applicable to all workers - this reflects the status of the remote repository so everyone should be in sync. - The only potential differences are with locally generated conflict files. - """ - - list_files(d) - expect_content(os.path.join(d,'TEST_FILE_MODIFIED_NONE.dat'), shared['md5_creator']) - - expect_content(os.path.join(d,'TEST_FILE_ADDED_LOSER.dat'), shared['md5_loser']) - - if not rmLocalStateDB: - expect_content(os.path.join(d,'TEST_FILE_MODIFIED_LOSER.dat'), shared['md5_loser']) - else: - expect_content(os.path.join(d,'TEST_FILE_MODIFIED_LOSER.dat'), shared['md5_creator']) # in this case, a conflict is created on the loser and file on the server stays the same - - expect_content(os.path.join(d,'TEST_FILE_ADDED_WINNER.dat'), shared['md5_winner']) - expect_content(os.path.join(d,'TEST_FILE_MODIFIED_WINNER.dat'), shared['md5_winner']) - expect_content(os.path.join(d,'TEST_FILE_ADDED_BOTH.dat'), shared['md5_winner']) # a conflict on the loser, server not changed - expect_content(os.path.join(d,'TEST_FILE_MODIFIED_BOTH.dat'), shared['md5_winner']) # a conflict on the loser, server not changed - - if not rmLocalStateDB: - expect_no_deleted_files(d) # normally any deleted files should not come back - else: - expect_deleted_files(d, ['TEST_FILE_DELETED_LOSER.dat', 'TEST_FILE_DELETED_WINNER.dat']) # but not TEST_FILE_DELETED_BOTH.dat ! - expect_content(os.path.join(d,'TEST_FILE_DELETED_LOSER.dat'), shared['md5_creator']) # this file should be downloaded by the loser because it has no other choice (no previous state to compare with) - expect_content(os.path.join(d,'TEST_FILE_DELETED_WINNER.dat'), shared['md5_creator']) # this file should be re-uploaded by the loser because it has no other choice (no previous state to compare with) - -############################################################################### - -def final_check_1_5(d): # this logic applies for 1.5.x client and owncloud server... - """ Final verification: all local sync folders should look the same. We expect conflicts and handling of deleted files depending on the rmLocalStateDB option. See code for details. 
- """ - import glob - - list_files(d) - - conflict_files = glob.glob(os.path.join(d,'*_conflict-*-*')) - - logger.debug('conflict files in %s: %s',d,conflict_files) - - if not rmLocalStateDB: - # we expect exactly 1 conflict file - - logger.warning("FIXME: currently winner gets a conflict file - exclude list should be updated and this assert modified for the winner") - - error_check(len(conflict_files) == 1, "there should be exactly 1 conflict file (%d)"%len(conflict_files)) - else: - # we expect exactly 3 conflict files - error_check(len(conflict_files) == 3, "there should be exactly 3 conflict files (%d)"%len(conflict_files)) - - for fn in conflict_files: - - if not rmLocalStateDB: - error_check('_BOTH' in fn, """only files modified in BOTH workers have a conflict - all other files should be conflict-free""") - - else: - error_check('_BOTH' in fn or '_LOSER' in fn or '_WINNER' in fn, """files which are modified by ANY worker have a conflict now; files which are not modified should not have a conflict""") - - deleted_files = glob.glob(os.path.join(d,'*_DELETED*')) - - logger.debug('deleted files in %s: %s',d,deleted_files) - - if not rmLocalStateDB: - error_check(len(deleted_files) == 0, 'deleted files should not be there normally') - else: - # deleted files "reappear" if local sync db is lost on the loser, the only file that does not reappear is the DELETED_BOTH which was deleted on *all* local clients - - error_check(len(deleted_files) == 2, "we expect exactly 2 deleted files") - - for fn in deleted_files: - error_check('_LOSER' in fn or '_WINNER' in fn, "deleted files should only reappear if delete on only one client (but not on both at the same time) ") diff --git a/lib/test_concurrentDirMove.py b/lib/test_concurrentDirMove.py index 4ac03a9..eb2c535 100644 --- a/lib/test_concurrentDirMove.py +++ b/lib/test_concurrentDirMove.py @@ -81,7 +81,7 @@ def adder(step): if delaySeconds<0: sleep(-delaySeconds) - run_ocsync(d) + run_ocsync(d,n=2) # when directory is renamed while file is uploaded the PUT request finishes with Conflict error code diff --git a/lib/test_concurrentDirRemove.py b/lib/test_concurrentDirRemove.py index 8489355..194bf16 100644 --- a/lib/test_concurrentDirRemove.py +++ b/lib/test_concurrentDirRemove.py @@ -23,24 +23,58 @@ filesizeKB = int(config.get('concurrentRemoveDir_filesizeKB',9000)) delaySeconds = int(config.get('concurrentRemoveDir_delaySeconds',3)) # if delaySeconds > 0 then remover waits; else the adder waits; +# True => use new webdav endpoint (dav/files) +# False => use old webdav endpoint (webdav) +use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True)) + testsets = [ {'concurrentRemoveDir_nfiles':3, 'concurrentRemoveDir_filesizeKB':10000, - 'concurrentRemoveDir_delaySeconds':5 }, # removing the directory while a large file is chunk-uploaded + 'concurrentRemoveDir_delaySeconds':5, + 'use_new_dav_endpoint': True }, # removing the directory while a large file is chunk-uploaded + {'concurrentRemoveDir_nfiles':3, + 'concurrentRemoveDir_filesizeKB':10000, + 'concurrentRemoveDir_delaySeconds':5, + 'use_new_dav_endpoint': False }, # removing the directory while a large file is chunk-uploaded {'concurrentRemoveDir_nfiles':40, 'concurrentRemoveDir_filesizeKB':9000, - 'concurrentRemoveDir_delaySeconds':5 }, # removing the directory while lots of smaller files are uploaded + 'concurrentRemoveDir_delaySeconds':5, + 'use_new_dav_endpoint': True }, # removing the directory while lots of smaller files are uploaded + {'concurrentRemoveDir_nfiles': 40, + 
'concurrentRemoveDir_filesizeKB': 9000, + 'concurrentRemoveDir_delaySeconds': 5, + 'use_new_dav_endpoint': False}, # removing the directory while lots of smaller files are uploaded {'concurrentRemoveDir_nfiles':5, 'concurrentRemoveDir_filesizeKB':15000, - 'concurrentRemoveDir_delaySeconds':-5 } # removing the directory before files are uploaded - + 'concurrentRemoveDir_delaySeconds':-5, + 'use_new_dav_endpoint': True }, # removing the directory before files are uploaded + {'concurrentRemoveDir_nfiles':5, + 'concurrentRemoveDir_filesizeKB':15000, + 'concurrentRemoveDir_delaySeconds':-5, + 'use_new_dav_endpoint': False }, # removing the directory before files are uploaded ] +import time +import tempfile + +from smashbox.utilities import * +from smashbox.utilities.hash_files import * + +def finish_if_not_capable(): + # Finish the test if some of the prerequisites for this test are not satisfied + if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True: + #Dont test for <= 9.1 with new endpoint, since it is not supported + logger.warn("Skipping test since webdav endpoint is not capable for this server version") + return True + return False @add_worker def creator(step): + if finish_if_not_capable(): + return + reset_owncloud_account() reset_rundir() @@ -48,19 +82,21 @@ def creator(step): d = make_workdir() d2 = os.path.join(d,'subdir') mkdir(d2) - run_ocsync(d) + run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint) step(5,'final check') - run_ocsync(d) + run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint) final_check(d) @add_worker def adder(step): + if finish_if_not_capable(): + return step(2,'sync the empty directory created by the creator') d = make_workdir() - run_ocsync(d) + run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint) step(3,'locally create content in the subdirectory') d2 = os.path.join(d,'subdir') @@ -71,17 +107,20 @@ def adder(step): step(4,'sync the added files in parallel') if delaySeconds<0: sleep(-delaySeconds) - run_ocsync(d) + run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint) step(5,'final check') - run_ocsync(d) + run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint) @add_worker def remover(step): + if finish_if_not_capable(): + return + step(2,'sync the empty directory created by the creator') d = make_workdir() - run_ocsync(d) + run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint) step(3,'locally remove subdir') d2 = os.path.join(d,'subdir') @@ -90,18 +129,20 @@ def remover(step): step(4,'sync the removed subdir in parallel') if delaySeconds>0: sleep(delaySeconds) - run_ocsync(d) + run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint) step(5,'final check') - run_ocsync(d) + run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint) final_check(d) @add_worker def checker(step): + if finish_if_not_capable(): + return step(5,'sync the final state of the repository into a fresh local folder') d = make_workdir() - run_ocsync(d) + run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint) final_check(d) diff --git a/lib/test_deltamove.py b/lib/test_deltamove.py new file mode 100644 index 0000000..dc7dc1b --- /dev/null +++ b/lib/test_deltamove.py @@ -0,0 +1,185 @@ +import os +import time +import tempfile + + +__doc__ = """ Check that changes to file are also propagated when file is moved + + +-----------+-----------------+------------------+ + | Step | Client1 | Client2 | + +===========+======================+=============+ + | 2 | create ref file | create work dir | + | | and workdir | | + 
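The six testsets above are just the three remove scenarios crossed with the two webdav endpoints. A sketch (not part of the patch) of generating the same matrix instead of writing it out by hand:

```
scenarios = [
    {'concurrentRemoveDir_nfiles': 3,  'concurrentRemoveDir_filesizeKB': 10000, 'concurrentRemoveDir_delaySeconds': 5},
    {'concurrentRemoveDir_nfiles': 40, 'concurrentRemoveDir_filesizeKB': 9000,  'concurrentRemoveDir_delaySeconds': 5},
    {'concurrentRemoveDir_nfiles': 5,  'concurrentRemoveDir_filesizeKB': 15000, 'concurrentRemoveDir_delaySeconds': -5},
]
testsets = [dict(s, use_new_dav_endpoint=endpoint)
            for s in scenarios for endpoint in (True, False)]
assert len(testsets) == 6
```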
diff --git a/lib/test_deltamove.py b/lib/test_deltamove.py
new file mode 100644
index 0000000..dc7dc1b
--- /dev/null
+++ b/lib/test_deltamove.py
@@ -0,0 +1,185 @@
+import os
+import time
+import tempfile
+
+
+__doc__ = """ Check that changes to a file are also propagated when the file is moved
+
+    +-----------+-----------------+------------------+
+    | Step      | Client1         | Client2          |
+    +===========+=================+==================+
+    | 2         | create ref file | create work dir  |
+    |           | and workdir     |                  |
+    +-----------+-----------------+------------------+
+    | 3         | add files and   |                  |
+    |           | sync            |                  |
+    +-----------+-----------------+------------------+
+    | 4         |                 | sync down        |
+    |           |                 | and check        |
+    +-----------+-----------------+------------------+
+    | 5         | mod files and   |                  |
+    |           | sync            |                  |
+    +-----------+-----------------+------------------+
+    | 6         |                 | sync down        |
+    |           |                 | and check        |
+    +-----------+-----------------+------------------+
+    | 7         |                 | move files       |
+    |           |                 | and modify       |
+    +-----------+-----------------+------------------+
+    | 8         | sync files and  |                  |
+    |           | check           |                  |
+    +-----------+-----------------+------------------+
+    | 9         | check checksums | check checksums  |
+    +-----------+-----------------+------------------+
+
+"""
+
+from smashbox.utilities import *
+from smashbox.utilities.hash_files import *
+from smashbox.utilities.monitoring import commit_to_monitoring
+
+nfiles = int(config.get('deltamove_nfiles',10))
+filesize = config.get('deltamove_filesize',1000)
+
+if type(filesize) is type(''):
+    filesize = eval(filesize)
+
+# True => use new webdav endpoint (dav/files)
+# False => use old webdav endpoint (webdav)
+use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True))
+
+testsets = [
+    { 'deltamove_filesize': OWNCLOUD_CHUNK_SIZE(0.01),
+      'deltamove_nfiles':2,
+      'use_new_dav_endpoint':True
+    },
+    { 'deltamove_filesize': OWNCLOUD_CHUNK_SIZE(3.5),
+      'deltamove_nfiles':2,
+      'use_new_dav_endpoint':True
+    },
+
+]
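The `eval` above lets a filesize be given as a string expression, e.g. `bin/smash -o deltamove_filesize='OWNCLOUD_CHUNK_SIZE(3.5)' lib/test_deltamove.py`. A sketch of what that evaluation does; the chunk size constant below is an assumption and the helper is a stand-in for the real `OWNCLOUD_CHUNK_SIZE` from `smashbox.utilities`:

```
CHUNK_SIZE = 10 * 1024 * 1024          # assumed client chunk size in bytes

def OWNCLOUD_CHUNK_SIZE(factor=1.0):   # stand-in for the smashbox.utilities helper
    return int(CHUNK_SIZE * factor)

filesize = 'OWNCLOUD_CHUNK_SIZE(3.5)'  # as it might arrive from -o on the command line
if type(filesize) is type(''):
    filesize = eval(filesize)
assert filesize == int(3.5 * CHUNK_SIZE)
```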
+
+def finish_if_not_capable():
+    # Finish the test if some of the prerequisites for this test are not satisfied
+    if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True:
+        # Don't test the new endpoint against servers older than 10.0, since it is not supported there
+        logger.warn("Skipping test since webdav endpoint is not capable for this server version")
+        return True
+    return False
+
+@add_worker
+def worker0(step):
+    if finish_if_not_capable():
+        return
+
+    # do not cleanup server files from previous run
+    reset_owncloud_account()
+
+    # cleanup all local files for the test
+    reset_rundir()
+
+    step(1,'Preparation')
+    d = make_workdir()
+
+    # create the test file
+    createfile(os.path.join(d,"TEST_FILE_MODIFIED.dat"),'0',count=1,bs=filesize)
+    modify_file(os.path.join(d,"TEST_FILE_MODIFIED.dat"),'1',count=1,bs=1000)
+    modify_file(os.path.join(d,"TEST_FILE_MODIFIED.dat"),'2',count=1,bs=1000)
+    checksum_reference = md5sum(os.path.join(d,"TEST_FILE_MODIFIED.dat"))
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    k0 = count_files(d)
+
+    step(3,'Add %s files and check if we still have k1+nfiles after resync'%nfiles)
+
+    logger.log(35,"Timestamp %f Files %d Size %d",time.time(),nfiles,filesize)
+
+    for i in range(nfiles):
+        createfile(os.path.join(d,"TEST_FILE_MODIFIED_%d.dat"%(i)),'0',count=1,bs=filesize)
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    k1 = count_files(d)
+
+    error_check(k1-k0==nfiles,'Expecting to have %d files more: see k1=%d k0=%d'%(nfiles,k1,k0))
+
+    step(5,"Modify files")
+
+    for i in range(nfiles):
+        modify_file(os.path.join(d,"TEST_FILE_MODIFIED_%d.dat"%(i)),'1',count=1,bs=1000)
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    k2 = count_files(d)
+
+    error_check(k2-k0==nfiles,'Expecting to have %d files: see k2=%d k0=%d'%(nfiles,k2,k0))
+
+    step(8,'Check moved and modified')
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    k3 = count_files(d)
+    error_check(k3-k0==nfiles,'Expecting to have %d files: see k3=%d k0=%d'%(nfiles,k3,k0))
+
+    step(9, "Final report")
+
+    for i in range(nfiles):
+        checksum = md5sum(os.path.join(d,"TEST_FILE_MODIFIED_MOVED_%d.dat"%(i)))
+        error_check(checksum==checksum_reference,'Expecting to have equal checksums, got %s instead of %s'%(checksum,checksum_reference))
+
+    logger.info('SUCCESS: %d files found',k2)
+
+@add_worker
+def worker1(step):
+    if finish_if_not_capable():
+        return
+
+    step(2,'Preparation')
+    d = make_workdir()
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+    checksum_reference = md5sum(os.path.join(d,"TEST_FILE_MODIFIED.dat"))
+    k0 = count_files(d)
+
+    step(4,'Resync and check files added by worker0')
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    k1 = count_files(d)
+
+    error_check(k1-k0==nfiles,'Expecting to have %d files more: see k1=%d k0=%d'%(nfiles,k1,k0))
+
+    step(6,'Resync and check files modified by worker0')
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    k2 = count_files(d)
+
+    error_check(k2-k0==nfiles,'Expecting to have %d files: see k2=%d k0=%d'%(nfiles,k2,k0))
+
+    step(7,'Move and modify')
+
+    for i in range(nfiles):
+        mv(os.path.join(d,"TEST_FILE_MODIFIED_%d.dat"%(i)), os.path.join(d,"TEST_FILE_MODIFIED_MOVED_%d.dat"%(i)))
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    for i in range(nfiles):
+        modify_file(os.path.join(d,"TEST_FILE_MODIFIED_MOVED_%d.dat"%(i)),'2',count=1,bs=1000)
+
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+
+    k3 = count_files(d)
+
+    error_check(k3-k0==nfiles,'Expecting to have %d files: see k3=%d k0=%d'%(nfiles,k3,k0))
+
+    step(8,"Final report")
+    for i in range(nfiles):
+        checksum = md5sum(os.path.join(d,"TEST_FILE_MODIFIED_MOVED_%d.dat"%(i)))
+        error_check(checksum==checksum_reference,'Expecting to have equal checksums, got %s instead of %s'%(checksum,checksum_reference))
+
diff --git a/lib/test_fileDownloadAbort.py b/lib/test_fileDownloadAbort.py
new file mode 100644
index 0000000..d1de31c
--- /dev/null
+++ b/lib/test_fileDownloadAbort.py
@@ -0,0 +1,144 @@
+import os
+import requests
+import random
+
+__doc__ = """ Download a file and abort before the end of the transfer.
+"""
+
+from smashbox.utilities import *
+from smashbox.utilities.hash_files import *
+
+filesize = config.get('fileDownloadAbort_filesize', 900000000)
+iterations = config.get('fileDownloadAbort_iterations', 25)
+
+if type(filesize) is type(''):
+    filesize = eval(filesize)
+
+testsets = [
+    { 'fileDownloadAbort_filesize': 900000000,
+      'fileDownloadAbort_iterations': 25
+    }
+]
+
+@add_worker
+def main(step):
+
+    step(1, 'Preparation')
+
+    # cleanup server files from previous run
+    reset_owncloud_account(num_test_users=1)
+    check_users(1)
+
+    # cleanup all local files for the test
+    reset_rundir()
+
+    d = make_workdir()
+    run_ocsync(d,user_num=1)
+
+    step(2, 'Add a file: filesize=%s'%filesize)
+
+    create_hashfile(d,filemask='BLOB.DAT',size=filesize)
+    list_files(d)
+    run_ocsync(d,user_num=1)
+    list_files(d)
+
+    reset_server_log_file(True)
+
+    step(3, 'Create link share')
+    user1 = "%s%i"%(config.oc_account_name, 1)
+
+    oc_api = get_oc_api()
+    oc_api.login(user1, config.oc_account_password)
+
+    share = oc_api.share_file_with_link('BLOB.DAT', perms=31)
+    share_url = share.get_link() + '/download'
+
+    # Start testing
+    test_urls = [
+        {
+            'url': oc_public_webdav_url(),
+            'auth': (share.get_token(), ''),
+            'description': 'Public webdav URL'
+        },
+        {
+            'url': share.get_link() + '/download',
+            'auth': None,
+            'description': 'Link share URL'
+        },
+        {
+            'url': os.path.join(oc_webdav_url(), 'BLOB.DAT'),
+            'auth': (user1, config.oc_account_password),
+            'description': 'Webdav URL'
+        },
+    ]
+
+    stepCount = 4
+
+    for test_url in test_urls:
+        cases = [
+            {'use_range': False, 'abort': True, 'description': 'download abort'},
+            {'use_range': True, 'abort': True, 'description': 'range download abort'},
+            {'use_range': False, 'abort': False, 'description': 'full download'},
+            {'use_range': True, 'abort': False, 'description': 'range download'},
+        ]
+
+        for case in cases:
+            step(stepCount, test_url['description'] + ' ' + case['description'])
+            for i in range(1, iterations):
+                test_download(i, test_url['url'], test_url['auth'], case['use_range'], case['abort'])
+            check_and_reset_logs()
+            stepCount += 1
+
+def check_and_reset_logs():
+    d = make_workdir()
+    scrape_log_file(d, True)
+    reset_server_log_file(True)
+
+    if len(reported_errors) > 0:
+        raise AssertionError('Errors found in log, aborting')
+
+def test_download(i, url, auth = None, use_range = False, abort = False):
+
+    if use_range:
+        # pick a range that always leaves room for an abort point at least
+        # 8192 bytes away from both ends (random.randint requires start <= end)
+        range_start = random.randint(8192, filesize - 32768)
+        range_end = random.randint(range_start + 16384, filesize - 8192)
+    else:
+        range_start = 0
+        range_end = filesize
+
+    if abort:
+        break_bytes = random.randint(range_start + 8192, range_end - 8192)
+
+    text = 'Download iteration %i' % i
+
+    headers = {}
+    if use_range:
+        headers['Range'] = 'bytes=%i-%i' % (range_start, range_end)
+        text += ' with range %s' % headers['Range']
+
+    if abort:
+        text += ' aborting after %i bytes' % break_bytes
+
+    text += ' of total size %i ' % filesize
+
+    text += ' url %s' % url
+
+    logger.info(text)
+
+    res = requests.get(url, auth=auth, stream=True, headers=headers)
+
+    if use_range:
+        expected_status_code = 206
+    else:
+        expected_status_code = 200
+
+    error_check(res.status_code == expected_status_code, 'Could not download, status code %i' % res.status_code)
+
+    read_bytes = 0
+    for chunk in res.iter_content(8192):
+        read_bytes += len(chunk)
+        if abort and read_bytes >= break_bytes:
+            break
+
+    res.close()
+
diff --git a/lib/test_nplusone.py b/lib/test_nplusone.py
index 9da7867..ea9bb83 100755
--- a/lib/test_nplusone.py
+++ b/lib/test_nplusone.py
@@ -34,6 +34,10 @@
 if type(filesize) is type(''):
     filesize = eval(filesize)
 
+# True => use new webdav endpoint (dav/files)
+# False => use old webdav endpoint (webdav)
+use_new_dav_endpoint = bool(config.get('use_new_dav_endpoint',True))
+
 testsets = [
     { 'nplusone_filesize': 1000,
       'nplusone_nfiles':100
@@ -57,8 +61,18 @@
 
 ]
 
+def finish_if_not_capable():
+    # Finish the test if some of the prerequisites for this test are not satisfied
+    if compare_oc_version('10.0', '<') and use_new_dav_endpoint == True:
+        # Don't test the new endpoint against servers older than 10.0, since it is not supported there
+        logger.warn("Skipping test since webdav endpoint is not capable for this server version")
+        return True
+    return False
+
 @add_worker
-def worker0(step):
+def worker0(step):
+    if finish_if_not_capable():
+        return
 
     # do not cleanup server files from previous run
     reset_owncloud_account()
@@ -68,7 +82,7 @@ def worker0(step):
 
     step(1,'Preparation')
     d = make_workdir()
-    run_ocsync(d)
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
     k0 = count_files(d)
 
     step(2,'Add %s files and check if we still have k1+nfiles after resync'%nfiles)
@@ -97,7 +111,7 @@ def worker0(step):
     ncorrupt = analyse_hashfiles(d)[2]
     fatal_check(ncorrupt==0, 'Corrupted files ON THE FILESYSTEM (%s) found'%ncorrupt)
 
-    run_ocsync(d)
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
 
     ncorrupt = analyse_hashfiles(d)[2]
 
@@ -125,14 +139,19 @@ def worker0(step):
 
 @add_worker
 def worker1(step):
+    if finish_if_not_capable():
+        return
+
     step(1,'Preparation')
     d = make_workdir()
-    run_ocsync(d)
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
     k0 = count_files(d)
 
     step(3,'Resync and check files added by worker0')
 
-    run_ocsync(d)
+    time0=time.time()
+    run_ocsync(d, use_new_dav_endpoint=use_new_dav_endpoint)
+    time1=time.time()
 
     ncorrupt = analyse_hashfiles(d)[2]
     k1 = count_files(d)
@@ -144,7 +163,8 @@ def worker1(step):
 
     fatal_check(ncorrupt==0, 'Corrupted files (%d) found'%ncorrupt) #Massimo 12-APR
 
-
+    step(4,"Final report")
+    push_to_monitoring("%s.elapsed" % source,time1-time0)
diff --git a/lib/test_pingpong.py b/lib/test_pingpong.py
index 38599a4..7a8f45f 100644
--- a/lib/test_pingpong.py
+++ b/lib/test_pingpong.py
@@ -89,7 +89,8 @@ def ping(step):
 
     step(90, "Verification if files moved at all")
 
-    error_check( len(glob.glob(os.path.join(d,'*_conflict-*-*'))) == 0, "Conflicts found!")
+    conflict_files = get_conflict_files(d)
+    error_check( len(conflict_files) == 0, "Conflicts found!")
 
 @add_worker
 def pong(step):
@@ -149,5 +150,5 @@ def pong(step):
 
     # FIXME: check if versions have been correctly created on the server
 
-
-    error_check( len(glob.glob(os.path.join(d,'*_conflict-*-*'))) == 0, "Conflicts found!")
+    conflict_files = get_conflict_files(d)
+    error_check( len(conflict_files) == 0, "Conflicts found!")
diff --git a/lib/test_userload.py b/lib/test_userload.py
index 40c9a0b..8aa6e0b 100755
--- a/lib/test_userload.py
+++ b/lib/test_userload.py
@@ -22,6 +22,7 @@
 
 hash_filemask = 'hash_{md5}'
 
+from smashbox.utilities import *
 from smashbox.utilities.hash_files import *
 
 @add_worker
diff --git a/protocol/protocol.md b/protocol/protocol.md
index ba18f7b..278ef23 100644
--- a/protocol/protocol.md
+++ b/protocol/protocol.md
@@ -237,6 +237,8 @@ Reponse body example:
 
+Besides the actual quota sizes in bytes the server also returns the fileId and the permissions
+of the top directory.
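A hedged sketch of the quota PROPFIND described above, requesting the two RFC 4331 quota properties plus the ownCloud file id and permissions; the server URL, credentials, and the exact `oc:` property names are assumptions here, not confirmed by the patch:

```
import requests

body = '''<?xml version="1.0"?>
<d:propfind xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">
  <d:prop>
    <d:quota-available-bytes/>
    <d:quota-used-bytes/>
    <oc:id/>
    <oc:permissions/>
  </d:prop>
</d:propfind>'''

res = requests.request('PROPFIND', 'http://localhost/owncloud/remote.php/webdav/',
                       auth=('admin', 'admin'), data=body, headers={'Depth': '0'})
print(res.status_code)   # 207 Multi-Status on success
```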
 
 ### Connection Validation Call
 
@@ -303,7 +305,7 @@ Response body example:
 < PROPFIND
-
+
 /remote.php/webdav/
@@ -317,15 +319,16 @@ Response body example:
 /remote.php/webdav/%d7%91%d7%a2%d7%91%d7%a8%d7%99%d7%aa-.txt
-
-"93ae1a06ce4340d6502496228f43718d"
-
-HTTP/1.1 200 OK
+
+00004227ocobzus5kn6s
+RDNVW
+"ed3bcb4907f9ebdfd8998242993545ba"
+
+HTTP/1.1 200 OK
-
+
-
 
 Client returns with a listing of all top level files and directories with their meta data. Comparing the ETag of the toplevel directory with the one from the previous call, client can detect data changes on the server. In case the top level ETag changed, client can
diff --git a/protocol/protocol.py b/protocol/protocol.py
index 6437ae6..4c820a7 100644
--- a/protocol/protocol.py
+++ b/protocol/protocol.py
@@ -190,7 +190,12 @@ def stat_top_level(url,depth=0):
     """
 
     client = smashbox.curl.Client()
+
+    # TODO: check if etag is quoted
     r = client.PROPFIND(url,query,depth=depth)
+
+    for x in r.propfind_response:
+        print x
     return r
 
 def all_prop_android(url,depth=0):
@@ -231,7 +236,13 @@ def ls_prop_desktop17(url,depth=0):
     """
 
     client = smashbox.curl.Client()
+
+    # make sure etag is quoted
+
    r=client.PROPFIND(url,query,depth=depth)
+
+    for x in r.propfind_response:
+        print x
     return r
 
diff --git a/python/smashbox/configgen/__init__.py b/python/smashbox/configgen/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/python/smashbox/configgen/generator.py b/python/smashbox/configgen/generator.py
new file mode 100644
index 0000000..224380e
--- /dev/null
+++ b/python/smashbox/configgen/generator.py
@@ -0,0 +1,110 @@
+import smashbox.configgen.processors as processors
+
+class Generator(object):
+    '''
+    Class to generate configuration files.
+
+    You need to set a processor chain in order to process the dict-like object
+    and write it into a file. If no processor is set, the object is written unchanged.
+
+    Results may vary depending on the processor chain being used.
+    '''
+    def __init__(self, processor_list = None):
+        '''
+        Initialize the object with the given processor chain, or with an empty
+        chain if None
+        '''
+        self.processor_list = [] if processor_list is None else processor_list
+
+    def insert_processor(self, i, processor):
+        '''
+        Insert a new processor at position "i".
+        Check list.insert for details
+        '''
+        self.processor_list.insert(i, processor)
+
+    def append_processor(self, processor):
+        '''
+        Append the processor to the end of the chain
+        '''
+        self.processor_list.append(processor)
+
+    def get_processor_list(self):
+        '''
+        Get the processor list / chain
+        '''
+        return self.processor_list
+
+    def get_processor_by_name(self, name):
+        '''
+        Get the processor by name, or None if it's not found
+        '''
+        for p in self.processor_list:
+            if p.get_name() == name:
+                return p
+
+    def process_dict(self, local_dict):
+        '''
+        Process the dictionary. It will go through the whole processor chain and
+        be returned after that.
+        '''
+        for p in self.processor_list:
+            local_dict = p.do_process(local_dict)
+        return local_dict
+
+    def write_dict(self, output_file, local_dict):
+        '''
+        Write the dictionary into a file. It will be readable by using the execfile
+        function, which should be the same or similar format as the smashbox.conf.template
+        file, and MUST be a valid smashbox.conf file
+        '''
+        with open(output_file, 'w') as f:
+            for key in local_dict:
+                f.write('%s = %s\n' % (key, repr(local_dict[key])))
+
+    def generate_new_config(self, input_file, output_file):
+        '''
+        Generate a new configuration file from the input_file. The input file should be
+        similar to the smashbox.conf.template. The processor chain must be set before
+        calling this function
+        '''
+        input_globals = {}
+        input_locals = {}
+        execfile(input_file, input_globals, input_locals)
+
+        input_locals = self.process_dict(input_locals)
+        self.write_dict(output_file, input_locals)
+
+    def set_processors_from_data(self, processor_data):
+        '''
+        Set the processor chain based on the data passed as parameter. Check the
+        _configgen variable in the smashbox.conf.template for working data.
+
+        The processor_data should be dictionary-like. Because the order of the
+        processors matters, an OrderedDict is recommended.
+        The keys of the dictionary are the class names of the processors that will be
+        used (from the smashbox.configgen.processors module). Currently there are only
+        4 processors available.
+        The value for each key should also be a dictionary to initialize the processor.
+        Only one parameter will be passed, that's why a dictionary is recommended,
+        although what you must pass depends on the specific processor.
+        '''
+        for key in processor_data:
+            if hasattr(processors, key):
+                processor_class = getattr(processors, key)
+                if not issubclass(processor_class, processors.BasicProcessor):
+                    continue
+                values = processor_data[key]
+                processor = processor_class(values)
+                self.append_processor(processor)
+            else:
+                pass
+
+    def process_data_to_file(self, data, output_file):
+        '''
+        Process the data passed as parameter through the chain and write the result
+        to the file
+        '''
+        data_to_output = self.process_dict(data)
+        self.write_dict(output_file, data_to_output)
+
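A sketch of driving the `Generator`; the file paths and key names are examples, and `oc_server` is assumed to be present in the template so the non-interactive `RequiredKeysProcessor` does not raise:

```
from collections import OrderedDict
from smashbox.configgen.generator import Generator

processor_data = OrderedDict([
    ('RequiredKeysProcessor', {'ask': False,
                               'keylist': [{'name': 'oc_server'}]}),
    ('KeyRemoverProcessor', {'keylist': ['_configgen']}),  # example key to drop
])

gen = Generator()
gen.set_processors_from_data(processor_data)
gen.generate_new_config('etc/smashbox.conf.template', 'etc/smashbox.conf')
```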
diff --git a/python/smashbox/configgen/processors.py b/python/smashbox/configgen/processors.py
new file mode 100644
index 0000000..722d5ad
--- /dev/null
+++ b/python/smashbox/configgen/processors.py
@@ -0,0 +1,250 @@
+from collections import OrderedDict
+import sys
+
+class ProcessorException(Exception):
+    '''
+    Exception that will be raised from processor classes
+    '''
+    KEY_DATA_MISSING = 1
+    REQUIRED_KEY_MISSING = 2
+    UNKNOWN_CONVERSION = 3
+    def __init__(self, error_code, message):
+        super(ProcessorException, self).__init__(message)
+        self.error_code = error_code
+
+def convert_string_to_type(string, totype):
+    '''
+    Simple type converter based on our rules. Any processor class from this module,
+    or any piece of code which might use this module (dictionaries for the
+    OverwritterProcessor), should use this function to convert types
+    '''
+    if totype == 'int':
+        return int(string)
+    elif totype == 'float':
+        return float(string)
+    elif totype == 'None':
+        return None if string == 'None' else string
+    elif totype == 'bool':
+        return string == 'True'
+    elif totype == 'list':
+        return string.split(',')
+    raise ProcessorException(ProcessorException.UNKNOWN_CONVERSION, 'unknown conversion requested')
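Worked examples of the converter above (note the deliberately narrow rules: only the exact string 'True' maps to True, and the 'None' conversion passes any other string through):

```
assert convert_string_to_type('42', 'int') == 42
assert convert_string_to_type('0.5', 'float') == 0.5
assert convert_string_to_type('True', 'bool') is True
assert convert_string_to_type('False', 'bool') is False    # anything but 'True' is False
assert convert_string_to_type('a,b,c', 'list') == ['a', 'b', 'c']
assert convert_string_to_type('None', 'None') is None
assert convert_string_to_type('x', 'None') == 'x'          # passed through unchanged
```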
+
+class BasicProcessor(object):
+    '''
+    Basic processor class. All processors should inherit from this class.
+
+    The class provides an observer mechanism to notify changes. Code can be hooked
+    into the processor using this system. The available notifications depend on
+    the specific processor, as does the point at which each notification is sent.
+
+    This class also provides a simple way to obtain a default name for the processor
+    (implementations shouldn't need to change this, but it's possible), as well as
+    a simple way to ask for keys (sufficient for now).
+    '''
+    def __init__(self, params):
+        self.observer_dict = OrderedDict()
+        self.params = params
+
+    def register_observer(self, name, observer):
+        '''
+        Register an observer under the given name. The observer MUST have a "notify_me"
+        method accepting 3 parameters: the processor name, the event type (defined in
+        each processor) and the message (depending on the event type)
+        '''
+        self.observer_dict[name] = observer
+
+    def unregister_observer(self, name):
+        '''
+        Unregister the observer with the specified name
+        '''
+        del self.observer_dict[name]
+
+    def _notify_observer(self, name, event_type, message):
+        '''
+        This is an internal function for the class and shouldn't be called from outside.
+
+        Notify the specified observer (by name) with the corresponding event type and message
+        '''
+        self.observer_dict[name].notify_me(self.get_name(), event_type, message)
+
+    def _notify_all(self, event_type, message):
+        '''
+        This is an internal function for the class and shouldn't be called from outside.
+
+        Notify all observers with the corresponding event type and message
+        '''
+        for key in self.observer_dict:
+            self.observer_dict[key].notify_me(self.get_name(), event_type, message)
+
+    def get_name(self):
+        '''
+        Convenience function to get the name of the processor. The default is the class name
+        '''
+        return self.__class__.__name__
+
+    def do_process(self, config_dict):
+        '''
+        This method MUST be implemented in each subclass
+        '''
+        raise NotImplementedError('method not implemented')
+
+    def ask_for_key(self, key_name, help_text=None, default_value=None):
+        '''
+        Ask for a key value interactively and return the user's input
+        '''
+        if default_value is None:
+            default_value = ''
+
+        if help_text is None:
+            sys.stdout.write('%s [%s] : ' % (key_name, default_value))
+        else:
+            sys.stdout.write('%s -> %s\n[%s] : ' % (key_name, help_text, default_value))
+        sys.stdout.flush()
+
+        input_value = sys.stdin.readline().rstrip()
+        if not input_value and default_value is not None:
+            return default_value
+        else:
+            return input_value
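A custom processor only needs a do_process implementation; a minimal sketch (the PrefixKeysProcessor class and its parameter are hypothetical):

```
from smashbox.configgen.processors import BasicProcessor

class PrefixKeysProcessor(BasicProcessor):
    '''Hypothetical processor: prefix every key with self.params['prefix']'''
    def do_process(self, config_dict):
        prefix = self.params['prefix']
        return dict(('%s%s' % (prefix, k), v) for k, v in config_dict.items())

processor = PrefixKeysProcessor({'prefix': 'oc_'})
print processor.do_process({'server': 'localhost'})  # {'oc_server': 'localhost'}
```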
+
+class RequiredKeysProcessor(BasicProcessor):
+    '''
+    Processor to fill required keys. All the keys are defined when this object is created.
+
+    This processor accepts as parameter a dictionary with 2 keys:
+    * keylist -> containing info for the required keys
+    * ask -> True / False to decide whether to ask for missing keys or throw an exception
+
+    The keylist info will contain (for now) something like:
+    {'keylist' : [{'name': 'the_name_of_the_key',
+                   'help_text': 'optional text to show as help',
+                   'default': 'the default value if the user does not input anything',
+                   'type': 'type conversion if needed (check the convert_string_to_type function)'},
+                  {......}]}
+
+    Event list:
+    * EVENT_PROCESS_INIT when the process starts
+    * EVENT_PROCESS_FINISH when the process finishes (just before returning the result)
+    * EVENT_KEY_MODIFIED when a required key is modified
+    * EVENT_KEY_ALREADY_SET when a required key is already set and won't be modified
+    '''
+    EVENT_PROCESS_INIT = 'process_init'
+    EVENT_PROCESS_FINISH = 'process_finish'
+    EVENT_KEY_MODIFIED = 'key_modified'
+    EVENT_KEY_ALREADY_SET = 'key_already_set'
+
+    def set_ask(self, ask):
+        '''
+        set the "ask" parameter dynamically
+        '''
+        self.params['ask'] = ask
+
+    def do_process(self, config_dict):
+        self._notify_all(self.EVENT_PROCESS_INIT, None)
+
+        for key_data in self.params['keylist']:
+            if 'name' not in key_data:
+                # the key_data must contain a name for the key
+                raise ProcessorException(ProcessorException.KEY_DATA_MISSING,
+                        'name attribute in the key data is missing')
+
+            real_key = key_data['name']
+            placeholder = key_data.get('help_text', None)
+            default_value = key_data.get('default', None)
+
+            if real_key in config_dict:
+                # if the key exists jump to the next one
+                self._notify_all(self.EVENT_KEY_ALREADY_SET, {'key': real_key, 'value': config_dict[real_key]})
+                continue
+
+            if self.params['ask']:
+                value = self.ask_for_key(real_key, placeholder, default_value)
+                if 'type' in key_data:
+                    value = convert_string_to_type(value, key_data['type'])
+                config_dict[real_key] = value
+                self._notify_all(self.EVENT_KEY_MODIFIED, {'key': real_key, 'value': value})
+            else:
+                raise ProcessorException(ProcessorException.REQUIRED_KEY_MISSING,
+                        'required key is missing')
+
+        self._notify_all(self.EVENT_PROCESS_FINISH, None)
+
+        return config_dict
+
+class KeyRemoverProcessor(BasicProcessor):
+    '''
+    Processor to remove keys. All the keys to be removed are defined when this object is created.
+
+    This processor accepts as parameter a dictionary with 1 key:
+    * keylist -> containing a list of keys to be removed
+
+    The keylist info will contain (for now) something like:
+    {'keylist' : ('key1', 'key2', 'key3', ....)}
+
+    Event list:
+    * EVENT_PROCESS_INIT when the process starts
+    * EVENT_PROCESS_FINISH when the process finishes (just before returning the result)
+    * EVENT_KEY_DELETED when a key is deleted
+    '''
+    EVENT_PROCESS_INIT = 'process_init'
+    EVENT_PROCESS_FINISH = 'process_finish'
+    EVENT_KEY_DELETED = 'key_deleted'
+
+    def do_process(self, config_dict):
+        self._notify_all(self.EVENT_PROCESS_INIT, None)
+        for key in self.params['keylist']:
+            del config_dict[key]
+            self._notify_all(self.EVENT_KEY_DELETED, key)
+        self._notify_all(self.EVENT_PROCESS_FINISH, None)
+        return config_dict
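Tying the two processors above together, a sketch of the `_configgen`-style data that `Generator.set_processors_from_data` consumes (the concrete key names and defaults are hypothetical):

```
from collections import OrderedDict

# order matters, hence OrderedDict: ask for required keys first, then drop helpers
processor_data = OrderedDict([
    ('RequiredKeysProcessor', {
        'ask': True,
        'keylist': [
            {'name': 'oc_server', 'help_text': 'server host', 'default': 'localhost'},
            {'name': 'oc_number_test_users', 'default': '3', 'type': 'int'},
        ],
    }),
    ('KeyRemoverProcessor', {'keylist': ('_configgen',)}),
])
```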
+
+class SortProcessor(BasicProcessor):
+    '''
+    Processor to sort keys.
+
+    This processor doesn't require parameters, so you can just set the param to None
+
+    Event list:
+    * EVENT_PROCESS_INIT when the process starts
+    * EVENT_PROCESS_FINISH when the process finishes (just before returning the result)
+    '''
+    EVENT_PROCESS_INIT = 'process_init'
+    EVENT_PROCESS_FINISH = 'process_finish'
+
+    def do_process(self, config_dict):
+        self._notify_all(self.EVENT_PROCESS_INIT, None)
+        result = OrderedDict(sorted(config_dict.items(), key=lambda t: t[0]))
+        self._notify_all(self.EVENT_PROCESS_FINISH, None)
+        return result
+
+class OverwritterProcessor(BasicProcessor):
+    '''
+    Processor to overwrite keys.
+
+    This processor accepts as parameter a dictionary with 1 key:
+    * dict_to_merge -> containing a dictionary to update the configuration
+
+    Event list:
+    * EVENT_PROCESS_INIT when the process starts
+    * EVENT_PROCESS_FINISH when the process finishes (just before returning the result)
+    * EVENT_BULK_UPDATE after updating the dictionary
+    '''
+    EVENT_PROCESS_INIT = 'process_init'
+    EVENT_PROCESS_FINISH = 'process_finish'
+    EVENT_BULK_UPDATE = 'bulk_update'
+
+    def set_dict_to_merge(self, merge_dict):
+        '''
+        set the "dict_to_merge" parameter dynamically
+        '''
+        self.params['dict_to_merge'] = merge_dict
+
+    def do_process(self, config_dict):
+        self._notify_all(self.EVENT_PROCESS_INIT, None)
+        config_dict.update(self.params['dict_to_merge'])
+        self._notify_all(self.EVENT_BULK_UPDATE, self.params['dict_to_merge'])
+        self._notify_all(self.EVENT_PROCESS_FINISH, None)
+        return config_dict
diff --git a/python/smashbox/configgen/processors_hooks.py b/python/smashbox/configgen/processors_hooks.py
new file mode 100644
index 0000000..ab92189
--- /dev/null
+++ b/python/smashbox/configgen/processors_hooks.py
@@ -0,0 +1,10 @@
+class LoggingHook(object):
+    '''
+    Simple logging hook for the processors to log what's happening
+    '''
+    def __init__(self, logger, level):
+        self.logger = logger
+        self.level = level
+
+    def notify_me(self, processor_name, event_type, message):
+        self.logger.log(self.level, '%s - %s : %s' % (processor_name, event_type, message))
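The hook plugs into the observer mechanism of BasicProcessor; a minimal wiring sketch:

```
import logging

from smashbox.configgen.processors import SortProcessor
from smashbox.configgen.processors_hooks import LoggingHook

logging.basicConfig(level=logging.DEBUG)

processor = SortProcessor(None)
processor.register_observer('log', LoggingHook(logging.getLogger('configgen'), logging.DEBUG))
processor.do_process({'b': 2, 'a': 1})
# logs: SortProcessor - process_init : None
#       SortProcessor - process_finish : None
```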
diff --git a/python/smashbox/curl.py b/python/smashbox/curl.py
index af79986..507ede1 100644
--- a/python/smashbox/curl.py
+++ b/python/smashbox/curl.py
@@ -34,8 +34,8 @@ def __init__(self,verbose=None):
             from logging import DEBUG
             self.verbose = config._loglevel <= DEBUG
         else:
-            self.verbose=verbose
-
+            from logging import DEBUG
+            self.verbose = config._loglevel <= DEBUG
         c.setopt(c.VERBOSE, self.verbose)
@@ -67,10 +67,9 @@ def PROPFIND(self,url,query,depth,parse_check=True,headers={}):
             logger.info('PROPFIND response body: %s',r.body_stream.getvalue())
 
         if parse_check:
-            if 200 <= r.rc and r.rc < 300: # only parse the response type for positive responses
-                #TODO: multiple Content-Type response headers will confuse the client as well
-                fatal_check('application/xml; charset=utf-8' in r.headers['Content-Type'],'Wrong response header "Content-Type:%s"'%r.headers['Content-Type']) # as of client 1.7 and 1.8
-                r.propfind_response=_parse_propfind_response(r.response_body,depth=depth)
+            #TODO: multiple Content-Type response headers will confuse the client as well
+            fatal_check('application/xml; charset=utf-8' in r.headers['Content-Type'],'Wrong response header "Content-Type:%s"'%r.headers['Content-Type']) # as of client 1.7 and 1.8
+            r.propfind_response=_parse_propfind_response(r.body_stream.getvalue(),depth=depth)
 
         return r
@@ -111,20 +110,17 @@ def GET(self,url,fn,headers={}):
 
         c = self.c
 
-        if fn:
-            f = open(fn,'w')
-            c.setopt(c.WRITEFUNCTION,f.write)
-        else:
-            body_stream = cStringIO.StringIO()
-            c.setopt(c.WRITEFUNCTION,body_stream.write)
-
-        r = self._perform_request(url,headers)
+        def perform_request(write_callback):
+            c.setopt(c.WRITEFUNCTION, write_callback)
+            return self._perform_request(url, headers)
 
         if fn:
-            f.close()
-        else:
-            r.response_body=body_stream.getvalue()
+            # stream the response straight into the target file
+            with open(fn, 'w') as f:
+                return perform_request(f.write)
 
+        # no target file given: buffer the response body in memory
+        body_stream = cStringIO.StringIO()
+        r = perform_request(body_stream.write)
+        r.response_body = body_stream.getvalue()
         return r
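A quick sketch of how the refactored GET is exercised (the URL and paths are hypothetical; the client reads its log level from the smashbox config):

```
from smashbox.curl import Client

client = Client()

# stream the body straight into a local file
client.GET('http://localhost/owncloud/remote.php/webdav/test.txt', '/tmp/test.txt')

# or buffer it in memory by passing no file name
r = client.GET('http://localhost/owncloud/remote.php/webdav/test.txt', None)
print r.response_body
```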
diff --git a/python/smashbox/owncloudorg/__init__.py b/python/smashbox/owncloudorg/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/python/smashbox/owncloudorg/locking.py b/python/smashbox/owncloudorg/locking.py
new file mode 100644
index 0000000..acdb888
--- /dev/null
+++ b/python/smashbox/owncloudorg/locking.py
@@ -0,0 +1,143 @@
+import xml.etree.ElementTree as ET
+
+import owncloud
+
+__author__ = 'nickv'
+
+
+class LockProvider:
+    LOCK_SHARED = 1
+    LOCK_EXCLUSIVE = 2
+
+    def __init__(self, oc_api):
+        """
+        :param oc_api: owncloud.Client
+        """
+        self.oc_api = oc_api
+
+    def enable_testing_app(self):
+        self.oc_api.make_ocs_request(
+            'POST',
+            'cloud',
+            'apps/testing'
+        )
+
+    def disable_testing_app(self):
+        self.oc_api.make_ocs_request(
+            'DELETE',
+            'cloud',
+            'apps/testing'
+        )
+
+    def isUsingDBLocking(self):
+        kwargs = {'accepted_codes': [100, 501, 999]}
+        res = self.oc_api.make_ocs_request(
+            'GET',
+            'apps/testing/api/v1',
+            'lockprovisioning',
+            **kwargs
+        )
+
+        tree = ET.fromstring(res.content)
+        code_el = tree.find('meta/statuscode')
+
+        return int(code_el.text) == 100
+
+    def lock(self, lock_level, user, path):
+        """
+        Lock the path for the given user
+
+        :param lock_level: 1 (shared) or 2 (exclusive)
+        :param user: User to lock the path for
+        :param path: Path to lock
+        :raises: ResponseError if the path could not be locked
+        """
+        self.oc_api.make_ocs_request(
+            'POST',
+            'apps/testing/api/v1',
+            'lockprovisioning/%i/%s?path=%s' % (lock_level, user, path)
+        )
+
+    def change_lock(self, lock_level, user, path):
+        """
+        Change an existing lock
+
+        :param lock_level: 1 (shared) or 2 (exclusive)
+        :param user: User holding the lock
+        :param path: Path to lock
+        :raises: ResponseError if the lock could not be changed
+        """
+        self.oc_api.make_ocs_request(
+            'PUT',
+            'apps/testing/api/v1',
+            'lockprovisioning/%i/%s?path=%s' % (lock_level, user, path)
+        )
+
+    def is_locked(self, lock_level, user, path):
+        """
+        Check whether the path is locked
+
+        :param lock_level: 1 (shared) or 2 (exclusive)
+        :param user: User holding the lock
+        :param path: Path to check
+        :returns: bool
+        """
+        kwargs = {'accepted_codes': [100, 423]}
+        res = self.oc_api.make_ocs_request(
+            'GET',
+            'apps/testing/api/v1',
+            'lockprovisioning/%i/%s?path=%s' % (lock_level, user, path),
+            **kwargs
+        )
+
+        tree = ET.fromstring(res.content)
+        code_el = tree.find('meta/statuscode')
+
+        return int(code_el.text) == 100
+
+    def unlock(self, lock_level=None, user=None, path=None):
+        """
+        Remove all matching locks; with no arguments, remove all set locks
+
+        :param lock_level: 1 (shared) or 2 (exclusive)
+        :param user: User to unlock the path for
+        :param path: Path to unlock
+        :raises: ResponseError if the lock could not be removed
+        """
+        ocs_path = 'lockprovisioning'
+
+        if lock_level is not None:
+            ocs_path = '%s/%i' % (ocs_path, lock_level)
+
+        if user is not None:
+            ocs_path = '%s/%s?path=%s' % (ocs_path, user, path)
+
+        self.oc_api.make_ocs_request(
+            'DELETE',
+            'apps/testing/api/v1',
+            ocs_path
+        )
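A sketch of how a test might drive LockProvider (the user and path are hypothetical; it assumes a logged-in admin client and the testing app being installed on the server):

```
from smashbox.script import config
from smashbox.owncloudorg.locking import LockProvider
from smashbox.utilities import get_oc_api

oc_api = get_oc_api()
oc_api.login(config.oc_admin_user, config.oc_admin_password)

lock_provider = LockProvider(oc_api)
lock_provider.enable_testing_app()

if lock_provider.isUsingDBLocking():
    # hold an exclusive lock on a folder while a worker tries to sync it
    lock_provider.lock(LockProvider.LOCK_EXCLUSIVE, 'admin', '/locked-folder')
    # ... run the sync step under test here ...
    lock_provider.unlock()  # drop all locks again

lock_provider.disable_testing_app()
```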
diff --git a/python/smashbox/owncloudorg/remote_sharing.py b/python/smashbox/owncloudorg/remote_sharing.py
new file mode 100644
index 0000000..e04f0a8
--- /dev/null
+++ b/python/smashbox/owncloudorg/remote_sharing.py
@@ -0,0 +1,84 @@
+from owncloud import HTTPResponseError
+from smashbox.script import config
+from smashbox.utilities import *
+
+
+def remote_share_file_with_user(filename, sharer, sharee, **kwargs):
+    """ Shares a file with a user on a remote server
+
+    :param filename: name of the file being shared
+    :param sharer: the user doing the sharing
+    :param sharee: the user receiving the share
+    :param kwargs: keyword args to be passed into the api, usually for share permissions
+    :returns: share id of the created share
+
+    """
+    from owncloud import ResponseError
+
+    logger.info('%s is sharing file %s with user %s', sharer, filename, sharee)
+
+    oc_api = get_oc_api()
+    oc_api.login(sharer, config.oc_account_password)
+
+    kwargs.setdefault('remote_user', True)
+    sharee = "%s@%s" % (sharee, oc_api.url)
+
+    try:
+        share_info = oc_api.share_file_with_user(filename, sharee, **kwargs)
+        logger.info('share id for file share is %s', str(share_info.share_id))
+        return share_info.share_id
+    except ResponseError as err:
+        logger.info('Share failed with %s - %s', str(err), str(err.get_resource_body()))
+        if err.status_code == 403 or err.status_code == 404:
+            return -1
+        else:
+            return -2
+
+
+def list_open_remote_share(sharee):
+    """ Lists the open (pending) remote shares of a user
+
+    :param sharee: user who received the shares
+    """
+    logger.info('Listing remote shares for user %s', sharee)
+
+    oc_api = get_oc_api()
+    oc_api.login(sharee, config.oc_account_password)
+    try:
+        open_remote_shares = oc_api.list_open_remote_share()
+    except HTTPResponseError as err:
+        logger.error('Listing open remote shares failed with %s - %s', str(err), str(err.get_resource_body()))
+        if err.status_code == 403 or err.status_code == 404:
+            return -1
+        else:
+            return -2
+
+    return open_remote_shares
+
+
+def accept_remote_share(sharee, share_id):
+    """ Accepts a remote share
+
+    :param sharee: user who received the share
+    :param share_id: id of the share to be accepted
+
+    """
+    logger.info('Accepting share %i for user %s', share_id, sharee)
+
+    oc_api = get_oc_api()
+    oc_api.login(sharee, config.oc_account_password)
+    error_check(oc_api.accept_remote_share(share_id), 'Accepting remote share failed')
+
+
+def decline_remote_share(sharee, share_id):
+    """ Declines a remote share
+
+    :param sharee: user who received the share
+    :param share_id: id of the share to be declined
+
+    """
+    logger.info('Declining share %i from user %s', share_id, sharee)
+
+    oc_api = get_oc_api()
+    oc_api.login(sharee, config.oc_account_password)
+    error_check(oc_api.decline_remote_share(share_id), 'Declining remote share failed')
\ No newline at end of file
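The intended federation flow, as a sketch (the user names are hypothetical, and the `id` field of a pending share is an assumption about what the OCS endpoint returns through pyocclient):

```
from smashbox.utilities import error_check
from smashbox.owncloudorg.remote_sharing import (
    remote_share_file_with_user, list_open_remote_share, accept_remote_share
)

# user1 federates a file to user2 (negative return values signal failure)
share_id = remote_share_file_with_user('TEST_FILE.dat', 'user1', 'user2')
error_check(share_id > 0, 'remote share failed')

# user2 sees the share as pending until accepting it
for share in list_open_remote_share('user2'):
    accept_remote_share('user2', int(share['id']))
```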
diff --git a/python/smashbox/utilities/__init__.py b/python/smashbox/utilities/__init__.py
index 7655dde..e93dc35 100644
--- a/python/smashbox/utilities/__init__.py
+++ b/python/smashbox/utilities/__init__.py
@@ -8,9 +8,12 @@
 import platform
 import shutil
 import re
+import requests
+import glob
+import fnmatch
 
 # Utilities to be used in the test-cases.
 from smashbox.utilities.version import version_compare
+from smashbox.utilities.monitoring import push_to_monitoring
 
 
 def compare_oc_version(compare_to, operator):
@@ -72,9 +75,10 @@ def setup_test():
     reset_owncloud_account(num_test_users=config.oc_number_test_users)
     reset_rundir()
     reset_server_log_file()
+    reset_diagnostics()
 
 
-def finalize_test():
+def finalize_test(returncode, total_duration):
     """ Finalize hooks run after last worker terminated. This is run under the name of the "supervisor" worker.
@@ -86,9 +90,13 @@
     """
     d = make_workdir()
     scrape_log_file(d)
+    push_to_monitoring(returncode, total_duration)
 
 ######### HELPERS
 
+def log_info(message):
+    logger.info(message)
+
 def reset_owncloud_account(reset_procedure=None, num_test_users=None):
     """ Prepare the test account on the owncloud server (remote state). Run this once at the beginning of the test.
@@ -292,6 +300,12 @@ def create_owncloud_group(group_name):
     oc_api.login(config.oc_admin_user, config.oc_admin_password)
     oc_api.create_group(group_name)
 
+def get_conflict_files(d):
+    conflict_files = []
+    conflict_files.extend(glob.glob(os.path.join(d,'*_conflict-*-*')))    # earlier client versions use the _conflict- pattern
+    conflict_files.extend(glob.glob(os.path.join(d,'*conflicted copy*'))) # since 2.4 conflict files are marked differently
+
+    return conflict_files
 
 ######### WEBDAV AND SYNC UTILITIES #####################
 
@@ -302,6 +316,9 @@ def oc_webdav_url(protocol='http',remote_folder="",user_num=None,webdav_endpoint=None):
     if config.oc_ssl_enabled:
         protocol += 's'
 
+    # strip off any leading / characters to prevent 1) an abspath result from the join below, 2) a double // and the like
+    remote_folder = remote_folder.lstrip('/')
+
     if webdav_endpoint is None:
         webdav_endpoint = config.oc_webdav_endpoint
@@ -325,7 +342,6 @@
 def ocsync_version():
     """ Return the version reported by oc_sync_cmd.
-
     Returns a tuple (major,minor,bugfix). For example: (1,7,2) or (2,1,1)
     """
@@ -343,11 +359,35 @@ def ocsync_version():
     return tuple([int(x) for x in version.split(".")])
 
+
+def oc_public_webdav_url(protocol='http',remote_folder="",token='',password=''):
+    """ Get the public WebDAV URL
+    """
+
+    if config.oc_ssl_enabled:
+        protocol += 's'
+
+    # strip off any leading / characters to prevent 1) an abspath result from the join below, 2) a double // and the like
+    remote_folder = remote_folder.lstrip('/')
+
+    remote_path = os.path.join(config.oc_root, 'public.php/webdav', remote_folder)
+
+    creds = ''
+    if token:
+        creds = token
+    if password:
+        creds += ':' + password
+
+    if creds:
+        creds += '@'
+
+    return protocol + '://' + creds + config.oc_server + '/' + remote_path
+
 # this is a local variable for each worker that keeps track of the repeat count for the current step
 ocsync_cnt = {}
 
-def run_ocsync(local_folder, remote_folder="", n=None, user_num=None, timeout_min=5):
+def run_ocsync(local_folder, remote_folder="", n=None, user_num=None, use_new_dav_endpoint=False, timeout_min=5):
     """ Run the ocsync for local_folder against remote_folder (or the main folder on the owncloud account
         if remote_folder is None). Repeat the sync n times. If n given then n -> config.oc_sync_repeat (default 1).
     """
@@ -364,6 +404,12 @@
     if platform.system() != "Windows":
         local_folder += os.sep # FIXME: HACK - is a trailing slash really needed by 1.6 owncloudcmd client?
 
+    # Force using the old endpoint if required; for the owncloud client this is done by disabling chunking-ng
+    if not use_new_dav_endpoint:
+        os.environ["OWNCLOUD_CHUNKING_NG"] = "0"
+    elif "OWNCLOUD_CHUNKING_NG" in os.environ:
+        del os.environ["OWNCLOUD_CHUNKING_NG"]
+
     for i in range(n):
         t0 = datetime.datetime.now()
         cmd = config.oc_sync_cmd+[local_folder,oc_webdav_url('owncloud',remote_folder,user_num)]
@@ -420,6 +466,9 @@ def expect_webdav_does_not_exist(path, user_num=None):
     error_check(r.rc >= 400,"Remote path exists: %s" % path) # class 4xx response is OK
 
 def expect_webdav_exist(path, user_num=None):
+    exitcode,stdout,stderr = runcmd('curl -s -k %s -XPROPFIND %s | xmllint --format - | grep NotFound | wc -l'%(config.get('curl_opts',''),oc_webdav_url(remote_folder=path, user_num=user_num)))
+    exists = stdout.rstrip() == "0"
+    error_check(exists, "Remote path does not exist: %s" % path)
     r = _prop_check(path,user_num)
     error_check(200 <= r.rc and r.rc < 300,"Remote path does not exist: %s" % path) # class 2xx response is OK
@@ -545,9 +594,12 @@ def mv(a,b):
     logger.info("move %s %s",a,b)
     shutil.move(a, b)
 
+def remove_db_in_folder(path):
+    """ Remove all sync client *.db files in the given folder """
+    for file in os.listdir(path):
+        if fnmatch.fnmatch(file, '*.db'):
+            remove_file(os.path.join(path, file))
 
 def list_files(path,recursive=False):
-
     if platform.system() == 'Windows':
         runcmd('dir /s /b ' + path)
         return
@@ -710,46 +763,107 @@ def fatal_check(expr,message=""):
 
 # ###### Server Log File Scraping ############
 
-def reset_server_log_file():
+def reset_server_log_file(force=False):
     """ Deletes the existing server log file so that there is a clean
         log file for the test run
     """
-    try:
-        if not config.oc_check_server_log:
+    if not force:
+        try:
+            if not config.oc_check_server_log:
+                return
+        except AttributeError: # allow this option not to be defined at all
             return
-    except AttributeError: # allow this option not to be defined at all
-        return
 
     logger.info('Removing existing server log file')
     cmd = '%s rm -rf %s/owncloud.log' % (config.oc_server_shell_cmd, config.oc_server_datadirectory)
     runcmd(cmd)
 
+def reset_diagnostics(force=False):
+    """ Cleans the diagnostic log on the server so that there is a fresh log
+        for the test run; requires diagnostic logging (SUMMARY level) to be
+        enabled on the server
+    """
+
+    if not force:
+        try:
+            if not config.oc_check_diagnostic_log:
+                return
+        except AttributeError: # allow this option not to be defined at all
+            return
+
+    logger.info('Initializing diagnostic log file')
+    log_url = 'http'
+    if config.oc_ssl_enabled:
+        log_url += 's'
+    log_url += '://' + config.oc_admin_user + ':' + config.oc_admin_password + '@' + config.oc_server
+
+    clean_log_url = log_url + '/' + os.path.join(config.oc_root, 'index.php/apps/diagnostics/log/clean')
+    res = requests.post(clean_log_url)
+
+    fatal_check(res.status_code == 200, 'Could not clean the diagnostic log file from the server, status code %i' % res.status_code)
+    fatal_check(res.text == "null", 'Diagnostic app seems disabled, returned body %s' % res.text)
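What the chunking toggle means in practice — a sketch of a test step driving the same folder against both endpoints (the environment variable is reset on every call):

```
from smashbox.utilities import make_workdir, run_ocsync

d = make_workdir()
run_ocsync(d)                             # old chunking (OWNCLOUD_CHUNKING_NG=0)
run_ocsync(d, use_new_dav_endpoint=True)  # chunking-ng against the new dav endpoint
```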
+def parse_log_file_lines(res):
+    """ Parse a JSON-lines response body into a list of dicts """
+    data = []
+    if res is not None:
+        import json
+        for line in res.iter_lines():
+            data.append(json.loads(line))
+    return data
+
+def get_diagnostic_log(force=False):
+    """
+    Obtains the server diagnostic log in JSON format and parses it
+    """
+
+    if not force:
+        try:
+            if not config.oc_check_diagnostic_log:
+                return []
+        except AttributeError: # allow this option not to be defined at all
+            return []
+
+    logger.info('Obtaining diagnostic log file')
+    log_url = 'http'
+    if config.oc_ssl_enabled:
+        log_url += 's'
+    log_url += '://' + config.oc_admin_user + ':' + config.oc_admin_password + '@' + config.oc_server
+
+    dwn_log_url = log_url + '/' + os.path.join(config.oc_root, 'index.php/apps/diagnostics/log/download')
+    res = requests.get(dwn_log_url)
+
+    fatal_check(res.status_code == 200, 'Could not download the diagnostic log file from the server, status code %i' % res.status_code)
+    return parse_log_file_lines(res)
+
-def scrape_log_file(d):
+def scrape_log_file(d, force=False):
     """ Copies over the server log file and searches it for specific strings
 
     :param d: The directory where the server log file is to be copied to
 
     """
-    try:
-        if not config.oc_check_server_log:
+    if not force:
+        try:
+            if not config.oc_check_server_log:
+                return
+        except AttributeError: # allow this option not to be defined at all
             return
-    except AttributeError: # allow this option not to be defined at all
-        return
 
-    if config.oc_server == '127.0.0.1' or config.oc_server == 'localhost':
-        cmd = 'cp %s/owncloud.log %s/.' % (config.oc_server_datadirectory, d)
-    else:
-        try:
-            log_user = config.oc_server_log_user
-        except AttributeError: # allow this option not to be defined at all
-            log_user = 'root'
-        cmd = 'scp -P %d %s@%s:%s/owncloud.log %s/.' % (config.scp_port, log_user, config.oc_server, config.oc_server_datadirectory, d)
-    rtn_code,stdout,stderr = runcmd(cmd)
-    error_check(rtn_code > 0, 'Could not copy the log file from the server, command returned %s' % rtn_code)
+    # download the server log
+    log_url = 'http'
+    if config.oc_ssl_enabled:
+        log_url += 's'
+    log_url += '://' + config.oc_admin_user + ':' + config.oc_admin_password + '@' + config.oc_server
+    log_url += '/' + os.path.join(config.oc_root, 'index.php/settings/admin/log/download')
+
+    res = requests.get(log_url)
+
+    fatal_check(res.status_code == 200, 'Could not download the log file from the server, status code %i' % res.status_code)
+
+    file_handle = open(os.path.join(d, 'owncloud.log'), 'wb', 8192)
+    for chunk in res.iter_content(8192):
+        file_handle.write(chunk)
+    file_handle.close()
 
     # search logfile for string (1 == not found; 0 == found):
     cmd = "grep -i \"integrity constraint violation\" %s/owncloud.log" % d
@@ -760,6 +874,10 @@ def scrape_log_file(d):
     rtn_code,stdout,stderr = runcmd(cmd, ignore_exitcode=True, log_warning=False)
     error_check(rtn_code > 0, "\"Exception\" message found in server log file")
 
+    cmd = "grep -i \"Error\" %s/owncloud.log" % d
+    rtn_code,stdout,stderr = runcmd(cmd, ignore_exitcode=True, log_warning=False)
+    error_check(rtn_code > 0, "\"Error\" message found in server log file")
+
     cmd = "grep -i \"could not obtain lock\" %s/owncloud.log" % d
     rtn_code,stdout,stderr = runcmd(cmd, ignore_exitcode=True, log_warning=False)
     error_check(rtn_code > 0, "\"Could Not Obtain Lock\" message found in server log file")
@@ -787,7 +905,7 @@ def get_oc_api():
         protocol += 's'
 
     url = protocol + '://' + config.oc_server + '/' + config.oc_root
-    oc_api = owncloud.Client(url, verify_certs=False)
+    # assumption: the dav endpoint version is driven by an optional use_new_dav_endpoint config flag
+    oc_api = owncloud.Client(url, verify_certs=False, dav_endpoint_version=config.get('use_new_dav_endpoint', False))
 
     return oc_api
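How the parsed diagnostic log is consumed — the same counting that the monitoring code further down performs:

```
from smashbox.utilities import get_diagnostic_log

no_queries = 0
for entry in get_diagnostic_log(force=True):
    if 'diagnostics' in entry and 'totalSQLQueries' in entry['diagnostics']:
        no_queries += int(entry['diagnostics']['totalSQLQueries'])
print no_queries
```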
@@ -933,39 +1051,3 @@ def expect_does_not_exist(fn):
     """
     error_check(not os.path.exists(fn), "File %s exists but should not" % fn)
-
-############ Helper functions to report/document the behaviour of the tests ############
-
-def do_not_report_as_failure(Issue=""):
-    config._test_ignored = Issue
-
-############ Smashbox Exceptions ############
-
-
-class Error(Exception):
-    """Base class for exceptions in this module."""
-    pass
-
-class SkipTestExecutionException(Error):
-    """Exception raised for unexpected errors/bugs in the execution engine of smashbox while executing a test.
-
-    Attributes:
-        message -- explanation of the error
-    """
-
-    def __init__(self, message):
-        self.message = message
-
-
-def restrict_execution(current_platform="",client_version="",endpoint="",disable=False):
-
-    if disable:
-        raise SkipTestExecutionException("Skipped Test: this test is not fully automated")
-    else:
-        text_message = "Skipped Test: specific test designed for: "
-        if platform.system() != current_platform:
-            raise SkipTestExecutionException(text_message + current_platform)
-        elif client_version!="" and str(str(ocsync_version())[1:-1].replace(", ","."))!=client_version:
-            raise SkipTestExecutionException(text_message + client_version)
-        elif endpoint!="" and config.oc_server != endpoint:
-            raise SkipTestExecutionException(text_message + endpoint)
diff --git a/python/smashbox/utilities/monitoring.py b/python/smashbox/utilities/monitoring.py
index a020f86..3af21ad 100644
--- a/python/smashbox/utilities/monitoring.py
+++ b/python/smashbox/utilities/monitoring.py
@@ -1,16 +1,101 @@
-from smashbox.utilities import *
+from smashbox.utilities import reflection, config, os
+import smashbox.utilities
 
-# simple monitoring to grafana (disabled if not set in config)
+def push_to_local_monitor(metric, value):
+    print metric, value
 
-def push_to_monitoring(metric,value,timestamp=None):
-
-    monitoring_host=config.get('monitoring_host',None)
-    monitoring_port=config.get('monitoring_port',2003)
-
-    if not monitoring_host:
-        return
-
-    if not timestamp:
-        timestamp = time.time()
-
-    os.system("echo '%s %s %s' | nc %s %s"%(metric,value,timestamp,monitoring_host,monitoring_port))
+def commit_to_monitoring(metric,value,timestamp=None):
+    shared = reflection.getSharedObject()
+    if 'monitoring_points' not in shared.keys():
+        shared['monitoring_points'] = []
+
+    # create a monitoring metric point
+    monitoring_point = dict()
+    monitoring_point['metric'] = metric
+    monitoring_point['value'] = value
+    monitoring_point['timestamp'] = timestamp
+
+    # append the metric to the shared object (reassign so the change propagates)
+    monitoring_points = shared['monitoring_points']
+    monitoring_points.append(monitoring_point)
+    shared['monitoring_points'] = monitoring_points
+
+def handle_local_push(returncode, total_duration, monitoring_points):
+    for monitoring_point in monitoring_points:
+        push_to_local_monitor(monitoring_point['metric'], monitoring_point['value'])
+    push_to_local_monitor("returncode", returncode)
+    push_to_local_monitor("elapsed", total_duration)
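A sketch of how a test feeds custom points into this pipeline (the metric name is hypothetical):

```
import time

from smashbox.utilities.monitoring import commit_to_monitoring

t0 = time.time()
# ... a sync step of the test runs here ...
commit_to_monitoring('download_duration', time.time() - t0)
```

The supervisor's finalize_test() then hands the return code and total duration to push_to_monitoring, which forwards every committed point to the handler selected by the monitoring_type option.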
+def handle_prometheus_push(returncode, total_duration, monitoring_points):
+    monitoring_endpoint = config.get('endpoint', None)
+    release = config.get('owncloud', None)
+    client = config.get('client', None)
+    suite = config.get('suite', None)
+    build = config.get('build', None)
+    duration_label = config.get('duration_label', None)
+    queries_label = config.get('queries_label', None)
+
+    points_to_push = []
+
+    # the total duration is exported by default for jenkins, if a label is given
+    if duration_label is not None:
+        points_to_push.append('# TYPE %s gauge' % (duration_label))
+        points_to_push.append('%s{owncloud=\\"%s\\",client=\\"%s\\",suite=\\"%s\\",build=\\"%s\\",exit=\\"%s\\"} %s' % (
+            duration_label,
+            release,
+            client,
+            suite,
+            build,
+            returncode,
+            total_duration))
+
+    # the number of queries is exported by default for jenkins, if a label is given
+    if queries_label is not None:
+        no_queries = 0
+        res_diagnostic_logs = smashbox.utilities.get_diagnostic_log()
+        for diagnostic_log in res_diagnostic_logs:
+            if 'diagnostics' in diagnostic_log and 'totalSQLQueries' in diagnostic_log['diagnostics']:
+                no_queries += int(diagnostic_log['diagnostics']['totalSQLQueries'])
+
+        points_to_push.append('# TYPE %s gauge' % (queries_label))
+        points_to_push.append('%s{owncloud=\\"%s\\",client=\\"%s\\",suite=\\"%s\\",build=\\"%s\\",exit=\\"%s\\"} %s' % (
+            queries_label,
+            release,
+            client,
+            suite,
+            build,
+            returncode,
+            no_queries))
+
+    # export all committed monitoring points
+    for monitoring_point in monitoring_points:
+        points_to_push.append('# TYPE %s gauge' % (monitoring_point['metric']))
+        points_to_push.append('%s{owncloud=\\"%s\\",client=\\"%s\\",suite=\\"%s\\",build=\\"%s\\",exit=\\"%s\\"} %s' % (
+            monitoring_point['metric'],
+            release,
+            client,
+            suite,
+            build,
+            returncode,
+            monitoring_point['value']))
+
+    # push all collected points to the monitoring endpoint
+    cmd = ''
+    for point_to_push in points_to_push:
+        cmd += point_to_push + '\n'
+
+    monitoring_cmd = 'echo "%s" | curl --data-binary @- %s\n' % (cmd, monitoring_endpoint)
+    smashbox.utilities.log_info('Pushing to monitoring: %s' % monitoring_cmd)
+    os.system(monitoring_cmd)
+
+def push_to_monitoring(returncode, total_duration):
+    monitoring_points = []
+    shared = reflection.getSharedObject()
+    if 'monitoring_points' in shared.keys():
+        monitoring_points = shared['monitoring_points']
+
+    monitoring_type = config.get('monitoring_type', None)
+    if monitoring_type == 'prometheus':
+        handle_prometheus_push(returncode, total_duration, monitoring_points)
+    elif monitoring_type == 'local':
+        handle_local_push(returncode, total_duration, monitoring_points)
\ No newline at end of file
diff --git a/travis/check-syntax.py b/travis/check-syntax.py
new file mode 100644
index 0000000..c555751
--- /dev/null
+++ b/travis/check-syntax.py
@@ -0,0 +1,6 @@
+import py_compile
+import sys
+
+# redirect compile errors to stdout so the shell wrapper below can capture them
+sys.stderr = sys.stdout
+py_compile.compile(sys.argv[1])
diff --git a/travis/check-syntax.sh b/travis/check-syntax.sh
new file mode 100755
index 0000000..8e52221
--- /dev/null
+++ b/travis/check-syntax.sh
@@ -0,0 +1,16 @@
+#!/bin/bash
+
+exitCode=0
+for FILE in $(find ../ -name "*.py" -type f -not -path "*/.git/*")
+do
+    errors=$(python travis/check-syntax.py $FILE)
+    if [ "$errors" != "" ]
+    then
+        echo -n "${errors}"
+        exitCode=1
+    fi
+done
+
+echo ""
+
+exit $exitCode
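For reference, after the shell unescapes the `\\"` sequences, the payload that handle_prometheus_push pipes to curl looks roughly like this (the label and metric values below are purely illustrative):

```
# TYPE jenkins_smashbox_test_duration gauge
jenkins_smashbox_test_duration{owncloud="server-release",client="client-version",suite="suite-name",build="build-id",exit="0"} 6.87
# TYPE jenkins_smashbox_db_queries gauge
jenkins_smashbox_db_queries{owncloud="server-release",client="client-version",suite="suite-name",build="build-id",exit="0"} 42
```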