Port relevant changes from ownCloud fork #139

Open · wants to merge 1 commit into base: master
151 changes: 149 additions & 2 deletions README.md
@@ -21,7 +21,84 @@ The goal of this is to:
If you think you see a bug - write a test-case and let others
reproduce it on their systems.

This is work in progress.
Quickstart
==========

- Find the IP of your local ownCloud server, e.g. using `ipconfig`
- Execute a smashbox run against that server, e.g. `172.16.12.112:80/octest`. Make sure to mount the `smashdir` and `tmp` directories to the local filesystem so that you can debug the test run and cache the client build
- The smash wrapper checks that the given test (e.g. `nplusone`) exists in the `lib` folder under the `test_[name].py` naming scheme
```
docker run \
-e SMASHBOX_URL=<ip>:<port>/<path-to-oc> \
-e SMASHBOX_USERNAME=admin \
-e SMASHBOX_PASSWORD=admin \
-e SMASHBOX_ACCOUNT_PASSWORD=admin \
-e SMASHBOX_TEST_NAME=nplusone \
-v ~/smashdir:/smashdir \
-v /tmp:/tmp \
owncloud/smashbox:build
```
- Check run logs
```
$ grep error ~/smashdir/log-test_nplusone.log    # or warning, critical, etc.
```
- Check client logs
```
$ grep error ~/smashdir/test_nplusone/worker0-ocsync.step01.cnt000.log    # or warning, critical, etc.
```
- Check sync client directories of workers
```
$ ls ~/smashdir/test_nplusone/worker1/
```
- You can also run the whole integration test suite in Docker against your server
```
./bin/run_all_integration.sh 172.16.12.112:80/octest
```

Important integration tests
===========================

* [Basic Sync and Conflicts ](lib/test_basicSync.py)
- basicSync_filesizeKB from 1kB to 50MB (normal and chunked files sync)
- basicSync_rmLocalStateDB removing local database in the test (index 0-3) or not (index 4-7)
* [Concurrently removing directory while files are being added ](lib/test_concurrentDirRemove.py)
- Currently only checks for corrupted files in the outcome
- Removing the directory while a large file is chunk-uploaded (index 0)
- Removing the directory while lots of smaller files are uploaded (index 1)
- Removing the directory before files are uploaded (index 2)
* [Resharing ](lib/oc-test/test_reshareDir.py)
- Share directory with receiver and receiver reshares one of the files with another user
* [Directory Sharing between users ](lib/oc-test/test_shareDir.py)
- Tests various sharing actions between users
* [Files Sharing between users ](lib/oc-test/test_shareFile.py)
- Tests various sharing actions between users
* [Files Sharing between users and groups ](lib/oc-test/test_shareGroup.py)
- Tests various sharing actions between users and groups
* [Files Sharing by link ](lib/oc-test/test_shareLink.py)
- Tests various sharing actions with links
* [Ensures correct behaviour having different permissions ](lib/oc-test/test_sharePermissions.py)
- Tests various sharing actions having share permissions
* [Ensures correct etag propagation 1](lib/owncloud/test_sharePropagationGroups.py)
- Tests etag propagation sharing/resharing between groups of users
* [Ensures correct etag propagation 2](lib/owncloud/test_sharePropagationInsideGroups.py)
- Tests etag propagation sharing/resharing between groups of users
* [Syncing shared mounts](lib/owncloud/test_shareMountInit.py)
- Tests syncing of share mounts across the most common sharing cases

Important performance tests
===========================

* [Upload/Download of small/big files](lib/test_nplusone.py)
- Test should monitor upload/download sync time in each of the scenarios (TODO)
- Test (index 0) verifies performance of many small files - 100 files, each 1kB
- Test (index 1) verifies performance of 1 big file above the chunking size, 60MB in total
* [Shared Mount Performance](lib/owncloud/test_shareMountInit.py)
- PROPFIND on root folder - initializes mount points (initMount is done only on the 1st PROPFIND on received shares)
- PROPFIND on root folder with initialized content and mount points
- PUT to non-shared folder
- PUT to shared folder
- GET to non-shared folder
- GET to shared folder
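The timing pattern these performance tests rely on can be sketched with a generic wrapper (a hypothetical helper, not the actual smashbox code):

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, print its wall-clock duration under the given label,
    and return both the result and the elapsed time in seconds."""
    t0 = time.time()
    result = fn(*args, **kwargs)
    elapsed = time.time() - t0
    print("%s %.6f" % (label, elapsed))
    return result, elapsed
```

Each operation above (PROPFIND, PUT, GET) would be wrapped this way, with the durations then exported to monitoring.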

Project tree
============
@@ -96,6 +173,8 @@ Location of sync client may be configured like this:
Installation
============

Note: Currently this framework works on Unix-like systems only. A Windows port is needed.

Clone git repository into your local ``smashbox`` directory.

Copy the etc/smashbox.conf.template into etc/smashbox.conf
@@ -140,7 +219,10 @@ Examples:

# basic test
bin/smash lib/test_basicSync.py


# basic test, specifying the test number as defined in the test's `testsets` array
bin/smash -t 0 lib/test_basicSync.py

# run a test with different parameters
bin/smash -o nplusone_nfiles=10 lib/test_nplusone.py
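The repeated `-o key=value` overrides can be thought of as a simple dict merged over the values from `smashbox.conf`. A sketch of the parsing (assumption: the actual handling in `bin/smash` may differ):

```python
def parse_overrides(pairs):
    """Turn repeated -o key=value strings into a dict of overrides
    that are merged over the values from smashbox.conf."""
    overrides = {}
    for item in pairs:
        key, sep, value = item.partition("=")
        if not sep:
            raise ValueError("expected key=value, got: %s" % item)
        overrides[key] = value
    return overrides
```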

@@ -149,6 +231,71 @@ Examples:

You will find the main log files in ~/smashdir/log* and all temporary files and detailed logs for each test case in ~/smashdir/<test-case>

Monitoring integration
=======================

Currently, the monitoring module supports `local` and `prometheus` endpoints. The Prometheus endpoint can be used in integration with Jenkins.

By default, two values are prepared for export, `total_duration` and `number_of_queries`; however, one can also embed custom variables inside a test using e.g. `commit_to_monitoring("download_duration", time1 - time0)` as done in `lib/test_nplusone.py`.

**NOTE: To enable checking the number of queries, one needs to set `oc_check_diagnostic_log = True` in the `smashbox.conf` file**

**NOTE: To enable diagnostics at SUMMARY level on the server, one needs to go to the server directory, e.g. `/var/www/owncloud`, and run:**

```
git clone https://github.com/owncloud/diagnostics apps/diagnostics
sudo -u www-data php occ app:enable diagnostics
sudo -u www-data php occ config:system:set --value true debug
sudo -u www-data php occ config:app:set --value 1 diagnostics diagnosticLogLevel
```

**Export to local monitor example:**

Executing

```
bin/smash -t 1 -o monitoring_type=local lib/test_nplusone.py
```

will execute index `1` of the `test_nplusone` test; adding the option flag `-o monitoring_type=local` results in the output below if the test completes successfully

```
download_duration 0.750847816467
upload_duration 1.4001121521
returncode 0
elapsed 6.87230300903
```

or the following in case of failure

```
returncode 2
elapsed 7.0446870327
```
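Since the local output is plain `name value` lines, it is easy to consume from scripts; a minimal parser sketch:

```python
def parse_local_metrics(text):
    """Parse the 'name value' lines emitted by the local monitoring
    endpoint into a dict mapping metric names to floats."""
    metrics = {}
    for line in text.strip().splitlines():
        name, value = line.split()
        metrics[name] = float(value)
    return metrics
```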

**Export to prometheus with jenkins example:**

Executing

```
bin/smash -t 1 -o monitoring_type=prometheus -o endpoint=http://localhost:9091/metrics/job/jenkins/instance/smashbox -o duration_label=jenkins_smashbox_test_duration -o queries_label=jenkins_smashbox_db_queries -o owncloud=daily-master -o client=2.3.1 -o suite=nplusonet1 -o build=test_build1 lib/test_nplusone.py
```

will result in:
* pushing the monitoring points to the Prometheus endpoint `http://localhost:9091/metrics/job/jenkins/instance/smashbox`
* the flags `-o duration_label=jenkins_smashbox_test_duration` and `-o queries_label=jenkins_smashbox_db_queries` causing the default results `total_duration` and `number_of_queries` to be exported to Prometheus
* the additional flags `-o owncloud=daily-master`, `-o client=2.3.1`, `-o suite=nplusonet1` and `-o build=test_build1` being available to distinguish smashbox runs

In case of a failure to push to monitoring, you will see something like:

`curl: (7) Failed to connect to localhost port 9091: Connection refused`
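The endpoint URL above follows the Prometheus Pushgateway convention (`/metrics/job/<job>/instance/<instance>`), and the pushed body is Prometheus text exposition format. A sketch of building such a payload (hypothetical helper; the real `push_to_monitoring` implementation may differ):

```python
def format_pushgateway_payload(metrics, labels=None):
    """Render metrics as Prometheus text-exposition lines, optionally
    attaching the same label set to every metric."""
    label_str = ""
    if labels:
        label_str = "{%s}" % ",".join(
            '%s="%s"' % (k, v) for k, v in sorted(labels.items()))
    lines = ["%s%s %s" % (name, label_str, value)
             for name, value in sorted(metrics.items())]
    return "\n".join(lines) + "\n"
```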

**Adding custom monitoring endpoint:**

One can add their own monitoring endpoint by [adding a new option](python/smashbox/utilities/monitoring.py) in `push_to_monitoring`. You can then exercise your custom test (as in [test_nplusone](lib/test_nplusone.py)) and monitoring endpoint by setting the flag
`-o monitoring_type=MY_CUSTOM_MONITORING_TYPE`, e.g. `-o monitoring_type=local`
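The dispatch a custom endpoint would hook into could look roughly like this (a hypothetical sketch; the real code lives in `python/smashbox/utilities/monitoring.py` and may be structured differently):

```python
def push_to_monitoring_sketch(monitoring_type, metrics):
    """Dispatch metrics to the configured endpoint; unknown types fail
    loudly so a misconfigured -o monitoring_type=... flag is caught."""
    if monitoring_type == "local":
        for name, value in sorted(metrics.items()):
            print(name, value)
    elif monitoring_type == "prometheus":
        pass  # push to the configured Pushgateway endpoint (omitted here)
    elif monitoring_type == "my_custom_type":
        pass  # a custom exporter would plug in here
    else:
        raise ValueError("unknown monitoring_type: %s" % monitoring_type)
```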


Different client/server
=======================
87 changes: 87 additions & 0 deletions bin/config_gen
@@ -0,0 +1,87 @@
#!/usr/bin/env python

import sys, os.path
# insert the path to the smashbox python modules based on the relative position of this script inside the source tree
exeDir = os.path.abspath(os.path.normpath(os.path.dirname(sys.argv[0])))
pythonDir = os.path.join(os.path.dirname(exeDir), 'python')
sys.path.insert(0, pythonDir)
etcDir = os.path.join(os.path.dirname(exeDir), 'etc')
defaultTemplateFile = os.path.join(etcDir, 'smashbox.conf.template-owncloud')
defaultOutputFile = os.path.join(etcDir, 'smashbox.conf')

import smashbox.configgen.generator as generator
import smashbox.configgen.processors as processors
from smashbox.configgen.processors_hooks import LoggingHook
import logging
import argparse
import json

parser = argparse.ArgumentParser(description='Config generator for smashbox')
parser.add_argument('-i', default=defaultTemplateFile, help='template file to be used', dest='input_file')
parser.add_argument('-o', default=defaultOutputFile, help='output file', dest='output_file')
group = parser.add_mutually_exclusive_group()
group.add_argument('--no-ask', default=None, action='store_false', help='don\'t ask for required keys', dest='ask_keys')
group.add_argument('--ask', default=None, action='store_true', help='ask for required keys', dest='ask_keys')
parser.add_argument('-k', default=[], action='append', required=False, help='key=value pairs', dest='keys')
parser.add_argument('-kt', default=[], action='append', required=False, help='key=type pairs', dest='key_types')
parser.add_argument('--key-value-file', help='json file containing key-value pairs. The file format should be something like {keyname: {value: value, type: type}, oc_server: {value: server.com, type: string}, oc_ssl_enable: {value: True, type: bool}}')
parser.add_argument('--logfile', help='write logs in this file')
args = parser.parse_args()

global_vars = {}
local_vars = {}
with open(args.input_file) as ifile:
code = compile(ifile.read(), args.input_file, 'exec')
exec(code, global_vars, local_vars)

overwrite_dict = {}

if args.key_value_file:
with open(args.key_value_file, 'r') as f:
data = json.load(f)
if type(data) is dict:
for data_element in data:
key = data_element
value = str(data[data_element]['value'])
if 'type' in data[data_element]:
value = processors.convert_string_to_type(value, data[data_element]['type'])
overwrite_dict[key] = value

# convert the keys argument to a dictionary
key_list = [item.split('=', 1) for item in args.keys]
key_dict = dict(key_list)

# convert the key_types to [[key, type],[key, type]] and change the type of the values
key_type_list = [item.split('=', 1) for item in args.key_types]
for keytype in key_type_list:
if keytype[0] in key_dict:
key_dict[keytype[0]] = processors.convert_string_to_type(key_dict[keytype[0]], keytype[1])
overwrite_dict.update(key_dict)

config_generator = generator.Generator()
config_generator.set_processors_from_data(local_vars['_configgen'])

if args.ask_keys is not None:
processor = config_generator.get_processor_by_name('RequiredKeysProcessor')
if processor is not None:
processor.set_ask(args.ask_keys)

if overwrite_dict:
# we need to overwrite keys
processor2 = config_generator.get_processor_by_name('OverwritterProcessor')
if processor2 is not None:
processor2.set_dict_to_merge(overwrite_dict)

# setup logging for each processor
if args.logfile:
logging.basicConfig(level=logging.NOTSET, format='%(asctime)-15s %(levelname)s %(name)s : %(message)s', filename=args.logfile)
for p in config_generator.get_processor_list():
processor_name = p.get_name()
logger = logging.getLogger('%s.%s' % (__name__, processor_name))
p.register_observer('logger', LoggingHook(logger, logging.INFO))

logging.getLogger(__name__).info('ready to start the generation')

# generate the config file
config_generator.process_data_to_file(local_vars, args.output_file)

28 changes: 28 additions & 0 deletions bin/run_all_integration.sh
@@ -0,0 +1,28 @@
#!/bin/bash

if [ -z "$1" ] ; then
echo "Running from etc/smashbox.conf locally"

if [ ! -f requirements.txt ]; then
echo "requirements.txt not found in this directory; cd to the smashbox root dir, then run this script"
exit 1
fi

CMD="bin/smash -v -a"
else
echo "Running in docker against server ip $1"
CMD="docker run -e SMASHBOX_URL=$1 -e SMASHBOX_USERNAME=admin -e SMASHBOX_PASSWORD=admin -e SMASHBOX_ACCOUNT_PASSWORD=admin owncloud/smashbox"
fi

$CMD lib/test_basicSync.py && \
$CMD lib/test_concurrentDirRemove.py && \
$CMD lib/test_nplusone.py && \
$CMD lib/oc-tests/test_reshareDir.py && \
$CMD lib/oc-tests/test_shareDir.py && \
$CMD lib/oc-tests/test_shareFile.py && \
$CMD lib/oc-tests/test_shareGroup.py && \
$CMD lib/oc-tests/test_shareLink.py && \
$CMD lib/oc-tests/test_sharePermissions.py && \
$CMD lib/owncloud/test_shareMountInit.py && \
$CMD lib/owncloud/test_sharePropagationGroups.py && \
$CMD lib/owncloud/test_sharePropagationInsideGroups.py
5 changes: 5 additions & 0 deletions etc/smashbox.conf.template
@@ -138,3 +138,8 @@ oc_server_log_user = "www-data"
# Reset the server log file and verify that no exceptions and other known errors have been logged
#
oc_check_server_log = False

#
# Reset the diagnostic log file and use diagnostics for assertions
#
oc_check_diagnostic_log = False