
Commit 0ce6ac8

Merge pull request #93 from flask-dashboard/refactor
Refactor
2 parents aec3370 + 2a2e264, commit 0ce6ac8


132 files changed: 3010 additions and 49717 deletions


.travis.yml

Lines changed: 7 additions & 1 deletion
@@ -6,5 +6,11 @@ python:
 - "3.5"
 - "3.6"
 
+install:
+- pip install codecov
+
 script:
-- python setup.py test
+- coverage run setup.py test
+
+after_success:
+- codecov

CHANGELOG.rst

Lines changed: 12 additions & 0 deletions
@@ -7,7 +7,19 @@ Please note that the changes before version 1.10.0 have not been documented.
 
 Unreleased
 ----------
+Changed
+
+- Removed two graphs: hits per hour and execution time per hour
+
+- New template
+
+- Refactored code
 
+Fixed issues:
+- #63
+- #80
+- #89
+-
 
 v1.11.0
 -------

MANIFEST.in

Lines changed: 1 addition & 0 deletions
@@ -1,5 +1,6 @@
 recursive-include flask_monitoringdashboard/static *
 recursive-include flask_monitoringdashboard/templates *
+recursive-include flask_monitoringdashboard/test *
 include requirements.txt
 include README.md
 include CHANGELOG.rst

README.md

Lines changed: 6 additions & 4 deletions
@@ -1,4 +1,10 @@
 # Flask Monitoring Dashboard
+[![Build Status](https://travis-ci.org/flask-dashboard/Flask-MonitoringDashboard.svg?branch=master)](https://travis-ci.org/flask-dashboard/Flask-MonitoringDashboard)
+[![Documentation Status](https://readthedocs.org/projects/flask-monitoringdashboard/badge/?version=latest)](http://flask-monitoringdashboard.readthedocs.io/en/latest/?badge=latest)
+[![codecov](https://codecov.io/gh/flask-dashboard/Flask-MonitoringDashboard/branch/master/graph/badge.svg)](https://codecov.io/gh/flask-dashboard/Flask-MonitoringDashboard)
+[![PyPI version](https://badge.fury.io/py/Flask-MonitoringDashboard.svg)](https://badge.fury.io/py/Flask-MonitoringDashboard)
+[![Py-version](https://img.shields.io/pypi/pyversions/flask_monitoringdashboard.svg)](https://img.shields.io/pypi/pyversions/flask_monitoringdashboard.svg)
+
 Dashboard for automatic monitoring of Flask web-services.
 
 The Flask Monitoring Dashboard is an extension that offers four main functionalities with little effort from the Flask developer:
@@ -19,10 +25,6 @@ You can view the results by default using the default endpoint (this can be conf
 
 For a more advanced documentation, take a look at the information on [this site](http://flask-monitoringdashboard.readthedocs.io/en/latest/functionality.html).
 
-### Status
-[![Build Status](https://travis-ci.org/flask-dashboard/Flask-MonitoringDashboard.svg?branch=master)](https://travis-ci.org/flask-dashboard/Flask-MonitoringDashboard.svg?branch=master)
-[![Documentation Status](https://readthedocs.org/projects/flask-monitoringdashboard/badge/?version=latest)](http://flask-monitoringdashboard.readthedocs.io/en/latest/?badge=latest)
-
 ## Installation
 To install from source, download the source code, then run this:
 

TODO.rst

Lines changed: 0 additions & 1 deletion
@@ -21,7 +21,6 @@ Features to be implemented
 - Page '/result/<endpoint>/time_per_version' - Max 10 versions per plot.
 - Page '/result/<endpoint>/outliers' - Max 20 results per table.
 
-[ ] Refactor all measurement-endpoints in a Blueprint
 
 Work in progress
 ----------------

docs/configuration.rst

Lines changed: 1 addition & 11 deletions
@@ -62,8 +62,6 @@ the entry point of the app. The following things can be configured:
     OUTLIER_DETECTION_CONSTANT=2.5
     DASHBOARD_ENABLED = True
     TEST_DIR=/<path to your project>/tests/
-    N=5
-    SUBMIT_RESULTS_URL=http://0.0.0.0:5000/dashboard/submit-test-results
     COLORS={'main':[0,97,255], 'static':[255,153,0]}
 
 This might look a bit overwhelming, but the following list explains everything in detail:
@@ -95,15 +93,7 @@ This might look a bit overwhelming, but the following list explains everything i
   the expected overhead is a bit larger, as you can find
   `here <https://github.com/flask-dashboard/Testing-Dashboard-Overhead>`_.
 
-- **TEST_DIR**, **N**, **SUBMIT_RESULTS_URL:**
-  To enable Travis to run your unit tests and send the results to the dashboard, you have to set those values:
-
-  - **TEST_DIR** specifies where the unit tests reside.
-
-  - **SUBMIT_RESULTS_URL** specifies where Travis should upload the test results to. When left out, the results will
-    not be sent anywhere, but the performance collection process will still run.
-
-  - **N** specifies the number of times Travis should run each unit test.
+- **TEST_DIR:** Specifies where the unit tests reside. This will show up in the configuration in the Dashboard.
 
 - **COLORS:** The endpoints are automatically hashed into a color.
   However, if you want to specify a different color for an endpoint, you can set this variable.
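The **COLORS** option described above only overrides the automatic choice; by default each endpoint name is hashed to a stable color. The following is a minimal sketch of that idea, for illustration only: the `endpoint_color` helper and the MD5-based scheme are assumptions, not the dashboard's actual implementation.

```python
# Illustrative only: hash an endpoint name to a stable RGB color,
# letting an explicit COLORS-style override win when present.
import hashlib

def endpoint_color(endpoint, overrides=None):
    overrides = overrides or {}
    if endpoint in overrides:
        return tuple(overrides[endpoint])
    digest = hashlib.md5(endpoint.encode('utf-8')).digest()
    return digest[0], digest[1], digest[2]

COLORS = {'main': [0, 97, 255], 'static': [255, 153, 0]}  # value from the example above
print(endpoint_color('main', COLORS))   # (0, 97, 255), the configured override
print(endpoint_color('login', COLORS))  # hashed color, stable across runs
```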

docs/functionality.rst

Lines changed: 9 additions & 19 deletions
@@ -93,38 +93,28 @@ Using the collected data, a number of observations can be made:
 
 Test-Coverage Monitoring
 ------------------------
-To enable Travis to run your unit tests and send the results to the dashboard, four steps have to be taken:
+To enable Travis to run your unit tests and send the results to the dashboard, three steps have to be taken:
 
-1. Update the config file ('config.cfg') to include three additional values, `TEST_DIR`, `SUBMIT_RESULTS_URL` and `N`.
-
-   - **TEST_DIR** specifies where the unit tests reside.
-
-   - **SUBMIT_RESULTS_URL** specifies where Travis should upload the test results to. When left out, the results will
-     not be sent anywhere, but the performance collection process will still run.
-
-   - **N** specifies the number of times Travis should run each unit test.
-
-2. The installation requirement for the dashboard has to be added to the `setup.py` file of your app:
+1. The installation requirement for the dashboard has to be added to the `setup.py` file of your app:
 
    .. code-block:: python
 
      dependency_links=["https://github.com/flask-dashboard/Flask-MonitoringDashboard/tarball/master#egg=flask_monitoringdashboard"]
 
     install_requires=('flask_monitoringdashboard')
 
-3. In your `.travis.yml` file, three script commands should be added:
+2. In your `.travis.yml` file, one script command should be added:
 
   .. code-block:: bash
 
-   export DASHBOARD_CONFIG=./config.cfg
-   export DASHBOARD_LOG_DIR=./logs/
-   python -m flask_monitoringdashboard.collect_performance
+   python -m flask_monitoringdashboard.collect_performance --test_folder=./tests --times=5 --url=https://yourdomain.org/dashboard
 
-   The config environment variable specifies where the performance collection process can find the config file.
-   The log directory environment variable specifies where the performance collection process should place the logs it uses.
-   The third command will start the actual performance collection process.
+   The `test_folder` argument specifies where the performance collection process can find the unit tests to use.
+   The `times` argument (optional, default: 5) specifies how many times to run each of the unit tests.
+   The `url` argument (optional) specifies where the dashboard is that needs to receive the performance results.
+   When the last argument is omitted, the performance testing will run, but without publishing the results.
 
-4. A method that is executed after every request should be added to the blueprint of your app.
+3. A method that is executed after every request should be added to the blueprint of your app.
    This is done by the dashboard automatically when the blueprint is passed to the binding function like so:
 
   .. code-block:: python
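The Python example referenced at the end of step 3 falls outside the context shown in this hunk. A minimal sketch of what the binding might look like, assuming `app` and `api` are placeholder names for your own Flask app and blueprint, and using the `bind(app, blue_print=...)` signature from the `flask_monitoringdashboard/__init__.py` change below:

```python
# Sketch only: 'app' and 'api' are placeholders for your own objects.
from flask import Flask, Blueprint
import flask_monitoringdashboard as dashboard

app = Flask(__name__)
api = Blueprint('api', __name__)

@api.route('/ping')
def ping():
    return 'pong'

# Passing the blueprint lets the dashboard register its after_request hook,
# which logs every endpoint hit so unit tests can later be grouped per endpoint.
dashboard.bind(app, blue_print=api)
app.register_blueprint(api)

if __name__ == '__main__':
    app.run()
```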

flask_monitoringdashboard/__init__.py

Lines changed: 9 additions & 9 deletions
@@ -13,8 +13,10 @@
 """
 
 import os
+
 from flask import Blueprint
-from flask_monitoringdashboard.config import Config
+
+from flask_monitoringdashboard.core.config import Config
 
 config = Config()
 user_app = None
@@ -49,22 +51,20 @@ def bind(app, blue_print=None):
         import os
         import datetime
         from flask import request
-        log_dir = os.getenv('DASHBOARD_LOG_DIR')
 
         @blue_print.after_request
         def after_request(response):
-            if log_dir:
-                t1 = str(datetime.datetime.now())
-                log = open(log_dir + "endpoint_hits.log", "a")
-                log.write("\"{}\",\"{}\"\n".format(t1, request.endpoint))
-                log.close()
+            hit_time_stamp = str(datetime.datetime.now())
+            log = open("endpoint_hits.log", "a")
+            log.write('"{}","{}"\n'.format(hit_time_stamp, request.endpoint))
+            log.close()
             return response
 
     # Add all route-functions to the blueprint
-    import flask_monitoringdashboard.routings
+    import flask_monitoringdashboard.views
 
     # Add wrappers to the endpoints that have to be monitored
-    from flask_monitoringdashboard.measurement import init_measurement
+    from flask_monitoringdashboard.core.measurement import init_measurement
    blueprint.before_app_first_request(init_measurement)
 
     # register the blueprint to the app
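After this change the hook writes to `endpoint_hits.log` in the current working directory (the `DASHBOARD_LOG_DIR` variable is gone), using the same quoted-CSV format that `collect_performance.py` reads back. A small round-trip sketch with a made-up timestamp and endpoint name:

```python
# Sketch of the log format: one '"<timestamp>","<endpoint>"' line per request.
import csv
import datetime

with open('endpoint_hits.log', 'w') as log:
    log.write('"time","endpoint"\n')                        # header written by collect_performance.py
    log.write('"2018-04-01 12:34:56.789012","api.ping"\n')  # line appended by after_request (made-up values)

with open('endpoint_hits.log') as log:
    for row in csv.DictReader(log):
        hit_time = datetime.datetime.strptime(row['time'], '%Y-%m-%d %H:%M:%S.%f')
        print(hit_time, row['endpoint'])
```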
flask_monitoringdashboard/collect_performance.py

Lines changed: 68 additions & 82 deletions
@@ -1,103 +1,89 @@
-import requests
-import configparser
-import time
-import datetime
-import os
-import sys
+import argparse
 import csv
+import datetime
+import time
 from unittest import TestLoader
 
-# Abort if config file is not specified.
-config = os.getenv('DASHBOARD_CONFIG')
-if config is None:
-    print('You must specify a config file for the dashboard to be able to use the unit test monitoring functionality.')
-    print('Please set an environment variable \'DASHBOARD_CONFIG\' specifying the absolute path to your config file.')
-    sys.exit(0)
-
-# Abort if log directory is not specified.
-log_dir = os.getenv('DASHBOARD_LOG_DIR')
-if log_dir is None:
-    print('You must specify a log directory for the dashboard to be able to use the unit test monitoring '
-          'functionality.')
-    print('Please set an environment variable \'DASHBOARD_LOG_DIR\' specifying the absolute path where you want the '
-          'log files to be placed.')
-    sys.exit(0)
+import requests
 
-n = 1
-url = None
-sys.path.insert(0, os.getcwd())
-parser = configparser.RawConfigParser()
-try:
-    parser.read(config)
-    if parser.has_option('dashboard', 'N'):
-        n = int(parser.get('dashboard', 'N'))
-    if parser.has_option('dashboard', 'TEST_DIR'):
-        test_dir = parser.get('dashboard', 'TEST_DIR')
-    else:
-        print('No test directory specified in your config file. Please do so.')
-        sys.exit(0)
-    if parser.has_option('dashboard', 'SUBMIT_RESULTS_URL'):
-        url = parser.get('dashboard', 'SUBMIT_RESULTS_URL')
-    else:
-        print('No url specified in your config file for submitting test results. Please do so.')
-except configparser.Error as e:
-    print("Something went wrong while parsing the configuration file:\n{}".format(e))
+# Parsing the arguments.
+parser = argparse.ArgumentParser(description='Collecting performance results from the unit tests of a project.')
+parser.add_argument('--test_folder', dest='test_folder', required=True,
+                    help='folder in which the unit tests can be found (example: ./tests)')
+parser.add_argument('--times', dest='times', default=5,
+                    help='number of times to execute every unit test (default: 5)')
+parser.add_argument('--url', dest='url', default=None,
+                    help='url of the Dashboard to submit the performance results to')
+args = parser.parse_args()
+print('Starting the collection of performance results with the following settings:')
+print(' - folder containing unit tests: ', args.test_folder)
+print(' - number of times to run tests: ', args.times)
+print(' - url to submit the results to: ', args.url)
+if not args.url:
+    print('The performance results will not be submitted.')
 
+# Initialize result dictionary and logs.
 data = {'test_runs': [], 'grouped_tests': []}
-log = open(log_dir + "endpoint_hits.log", "w")
-log.write("\"time\",\"endpoint\"\n")
+log = open('endpoint_hits.log', 'w')
+log.write('"time","endpoint"\n')
 log.close()
-log = open(log_dir + "test_runs.log", "w")
-log.write("\"start_time\",\"stop_time\",\"test_name\"\n")
-
-if test_dir:
-    suites = TestLoader().discover(test_dir, pattern="*test*.py")
-    for i in range(n):
-        for suite in suites:
-            for case in suite:
-                for test in case:
-                    result = None
-                    t1 = str(datetime.datetime.now())
-                    time1 = time.time()
-                    result = test.run(result)
-                    time2 = time.time()
-                    t2 = str(datetime.datetime.now())
-                    log.write("\"{}\",\"{}\",\"{}\"\n".format(t1, t2, str(test)))
-                    t = (time2 - time1) * 1000
-                    data['test_runs'].append({'name': str(test), 'exec_time': t, 'time': str(datetime.datetime.now()),
-                                              'successful': result.wasSuccessful(), 'iter': i + 1})
+log = open('test_runs.log', 'w')
+log.write('"start_time","stop_time","test_name"\n')
 
+# Find the tests and execute them the specified number of times.
+# Add the performance results to the result dictionary.
+suites = TestLoader().discover(args.test_folder, pattern="*test*.py")
+for iteration in range(args.times):
+    for suite in suites:
+        for case in suite:
+            for test in case:
+                test_result = None
+                start_time_stamp = str(datetime.datetime.now())
+                time_before = time.time()
+                test_result = test.run(test_result)
+                time_after = time.time()
+                end_time_stamp = str(datetime.datetime.now())
+                log.write('"{}","{}","{}"\n'.format(start_time_stamp, end_time_stamp, str(test)))
+                execution_time = (time_after - time_before) * 1000
+                data['test_runs'].append(
+                    {'name': str(test), 'exec_time': execution_time, 'time': str(datetime.datetime.now()),
+                     'successful': test_result.wasSuccessful(), 'iter': iteration + 1})
 log.close()
 
-# Read and parse the log containing the test runs
-runs = []
-with open(log_dir + 'test_runs.log') as log:
+# Read and parse the log containing the test runs into an array for processing.
+test_runs = []
+with open('test_runs.log') as log:
     reader = csv.DictReader(log)
     for row in reader:
-        runs.append([datetime.datetime.strptime(row["start_time"], "%Y-%m-%d %H:%M:%S.%f"),
-                     datetime.datetime.strptime(row["stop_time"], "%Y-%m-%d %H:%M:%S.%f"),
-                     row['test_name']])
+        test_runs.append([datetime.datetime.strptime(row["start_time"], "%Y-%m-%d %H:%M:%S.%f"),
+                          datetime.datetime.strptime(row["stop_time"], "%Y-%m-%d %H:%M:%S.%f"),
+                          row['test_name']])
 
-# Read and parse the log containing the endpoint hits
-hits = []
-with open(log_dir + 'endpoint_hits.log') as log:
+# Read and parse the log containing the endpoint hits into an array for processing.
+endpoint_hits = []
+with open('endpoint_hits.log') as log:
     reader = csv.DictReader(log)
     for row in reader:
-        hits.append([datetime.datetime.strptime(row["time"], "%Y-%m-%d %H:%M:%S.%f"),
-                     row['endpoint']])
+        endpoint_hits.append([datetime.datetime.strptime(row["time"], "%Y-%m-%d %H:%M:%S.%f"),
+                              row['endpoint']])
 
-# Analyze logs to find out which endpoints are hit by which unit tests
-for h in hits:
-    for r in runs:
-        if r[0] <= h[0] <= r[1]:
-            if {'endpoint': h[1], 'test_name': r[2]} not in data['grouped_tests']:
-                data['grouped_tests'].append({'endpoint': h[1], 'test_name': r[2]})
+# Analyze the two arrays to find out which endpoints were hit by which unit tests.
+# Add the endpoint_name/test_name combination to the result dictionary.
+for endpoint_hit in endpoint_hits:
+    for test_run in test_runs:
+        if test_run[0] <= endpoint_hit[0] <= test_run[1]:
+            if {'endpoint': endpoint_hit[1], 'test_name': test_run[2]} not in data['grouped_tests']:
+                data['grouped_tests'].append({'endpoint': endpoint_hit[1], 'test_name': test_run[2]})
             break
 
-# Try to send test results and endpoint-grouped unit tests to the flask_monitoringdashboard
-if url:
+# Send test results and endpoint_name/test_name combinations to the Dashboard if specified.
+if args.url:
+    if args.url[-1] == '/':
+        args.url += 'submit-test-results'
+    else:
+        args.url += '/submit-test-results'
     try:
-        requests.post(url, json=data)
-        print('Sent unit test results to the dashboard.')
+        requests.post(args.url, json=data)
+        print('Sent unit test results to the Dashboard at ', args.url)
     except Exception as e:
         print('Sending unit test results to the dashboard failed:\n{}'.format(e))
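For reference, the payload that the script POSTs to `<url>/submit-test-results` is a plain JSON document built from the two lists above. A hedged sketch with invented values follows: the test and endpoint names are placeholders, and only the keys come from the diff.

```python
# Shape of the submitted payload; values are illustrative placeholders.
import requests

data = {
    'test_runs': [
        {'name': 'test_ping (tests.test_api.ApiTestCase)',  # str(test)
         'exec_time': 12.5,                                  # milliseconds
         'time': '2018-04-01 12:34:56.789012',
         'successful': True,
         'iter': 1},
    ],
    'grouped_tests': [
        {'endpoint': 'api.ping',
         'test_name': 'test_ping (tests.test_api.ApiTestCase)'},
    ],
}

SUBMIT = False  # flip to True to actually send the payload
if SUBMIT:
    requests.post('https://yourdomain.org/dashboard/submit-test-results', json=data)
```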
flask_monitoringdashboard/core/__init__.py

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+"""
+Core files for the Flask Monitoring Dashboard
+- auth.py handles authentication
+- forms.py are used for generating WTF_Forms
+- measurements.py contains a number of wrappers
+- outlier.py contains outlier information
+"""
