This repository was archived by the owner on Mar 24, 2021. It is now read-only.

Commit e3ed7ee

get size from Model

* update CI
* allow GPU grows #11
1 parent 6654417 commit e3ed7ee

File tree: 14 files changed, +129 -111 lines

.codecov.yml

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ coverage:
   patch:
     default:
       against: auto
-      target: 40% # specify the target "X%" coverage to hit
+      target: 25% # specify the target "X%" coverage to hit
       # threshold: 50% # allow this much decrease on patch
   changes: false

.travis.yml

Lines changed: 3 additions & 1 deletion
@@ -7,6 +7,8 @@ env:
   global:
     - DISPLAY=""

+cache: pip
+
 matrix:
   include:
     # - python: 2.7
@@ -23,6 +25,6 @@ install:
   - pip install tox codecov
   - pip list

-script: tox
+script: tox --sitepackages

 after_success: codecov

README.md

Lines changed: 11 additions & 6 deletions
@@ -45,13 +45,11 @@ For more model and configuration please see [YOLO website](http://pjreddie.com/
         --path_weights ./model_data/yolo.h5 \
         --path_anchors ./model_data/yolo_anchors.csv \
         --path_classes ./model_data/coco_classes.txt \
-        --model_image_size 608 608 \
         --path_output ./results \
         --path_image ./model_data/bike-car-dog.jpg \
         --path_video person.mp4
     ```
     For Full YOLOv3, just do in a similar way, just specify model path and anchor path with `--path_weights <model_file>` and `--path_anchors <anchor_file>`.
-    Note `model_image_size` is depending on used model, see width and height in model config `*.cfg`. Expected values are ` 608 608` for full YOLO and and `416 416` for Tiny YOLO.
 4. MultiGPU usage: use `--nb_gpu N` to use N GPUs. It is passed to the Keras [multi_gpu_model()](https://keras.io/utils/#multi_gpu_model).

 ---
@@ -93,7 +91,14 @@ If you want to use original pre-trained weights for YOLOv3:
 1. The test environment is Python 3.x ; Keras 2.2.0 ; tensorflow 1.14.0
 2. Default anchors are used. If you use your own anchors, probably some changes are needed.
 3. The inference result is not totally the same as Darknet but the difference is small.
-4. The loaded model takes whole GPU memory.
-5. Always load pretrained weights and freeze layers in the first stage of training. Or try Darknet training. It's OK if there is a mismatch warning.
-6. The training strategy is for reference only. Adjust it according to your dataset and your goal. and add further strategy if needed.
-7. For speeding up the training process with frozen layers train_bottleneck.py can be used. It will compute the bottleneck features of the frozen model first and then only trains the last layers. This makes training on CPU possible in a reasonable time. See this [post](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html) for more information on bottleneck features.
+4. Always load pretrained weights and freeze layers in the first stage of training. Or try Darknet training. It's OK if there is a mismatch warning.
+5. The training strategy is for reference only. Adjust it according to your dataset and your goal. and add further strategy if needed.
+6. For speeding up the training process with frozen layers train_bottleneck.py can be used. It will compute the bottleneck features of the frozen model first and then only trains the last layers. This makes training on CPU possible in a reasonable time. See this [post](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html) for more information on bottleneck features.
+7. Failing while run multi-GPU training, think about porting to TF 2.0.
+
+---
+
+## Nice reading
+
+- [Building efficient data pipelines using TensorFlow](https://towardsdatascience.com/building-efficient-data-pipelines-using-tensorflow-8f647f03b4ce)
+- [How to use half precision float16 when training on RTX cards with Tensorflow / Keras](https://medium.com/@noel_kennedy/how-to-use-half-precision-float16-when-training-on-rtx-cards-with-tensorflow-keras-d4033d59f9e4)
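The multi-GPU note in the README above defers to Keras' `multi_gpu_model()`. A minimal, hedged sketch of how that wrapper is typically applied (Keras 2.2.x / TF 1.x assumed; `build_yolo_body` is a stand-in for whatever builds the single-GPU model, not a function from this repo):

```python
# Minimal sketch of the `--nb_gpu N` idea, assuming Keras 2.2.x / TF 1.x.
# `build_yolo_body` is an illustrative placeholder, not this repo's real API.
from keras.utils import multi_gpu_model


def wrap_multi_gpu(build_yolo_body, nb_gpu=1):
    model = build_yolo_body()           # plain single-GPU Keras model
    if nb_gpu <= 1:
        return model                    # nothing to replicate
    # replicates the model on nb_gpu devices and splits each batch across them
    return multi_gpu_model(model, gpus=nb_gpu)
```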

appveyor.yml

Lines changed: 4 additions & 4 deletions
@@ -12,10 +12,10 @@ environment:
   # a later point release.
   # See: http://www.appveyor.com/docs/installed-software#python

-  - PYTHON: "C:\\Python35-x64"
-    PYTHON_VERSION: "3.5.x"
-    PYTHON_ARCH: "64"
-    TOXENV: "py35"
+  # - PYTHON: "C:\\Python35-x64"
+  #   PYTHON_VERSION: "3.5.x"
+  #   PYTHON_ARCH: "64"
+  #   TOXENV: "py35"

   - PYTHON: "C:\\Python36-x64"
     PYTHON_VERSION: "3.6.x"

circle.yml

Lines changed: 36 additions & 32 deletions
@@ -1,35 +1,49 @@
 version: 2.0

+references:
+
+  install_deps: &install_deps
+    run:
+      name: Install Dependences
+      command: |
+        sudo apt-get update -qq
+        sudo apt-get install pkg-config python-dev python-tk
+        # PyPI
+        pip install -r requirements.txt --user
+        python --version ; pip --version ; pip list
+
 jobs:
-  Py35:
+
+  Py3-Tests:
     docker:
-      - image: circleci/python:3.5
-    steps: &steps
+      - image: circleci/python:3.6
+    steps:
       - checkout
-
-      - run:
-          name: Install Packages
-          command: |
-            sudo apt-get update
-            sudo apt-get install pkg-config python-dev python-tk
-
-      - run:
-          name: Install PyPI dependences
-          command: |
-            pip install -r requirements.txt --user
-            sudo pip install coverage pytest pytest-cov codecov
-            python --version ; pip --version ; pip list
+      - *install_deps

       - run:
           name: Testing
           command: |
            export DISPLAY=""
-            coverage run --source keras_yolo3 -m py.test keras_yolo3 scripts -v --doctest-modules --junitxml=test-reports/pytest_junit.xml
+            sudo pip install coverage pytest pytest-cov codecov pytest-flake8
+            coverage run --source keras_yolo3 -m py.test keras_yolo3 scripts -v --doctest-modules --junitxml=test-reports/pytest_junit.xml --flake8
            coverage report && coverage xml -o test-reports/coverage.xml
            codecov

+      - store_test_results:
+          path: test-reports
+      - store_artifacts:
+          path: test-reports
+
+  Py3-Sample:
+    docker:
+      - image: circleci/python:3.6
+    steps:
+      - checkout
+      - *install_deps
+
       - run:
-          name: Sample Detection
+          name: Pre-trained Detection
           command: |
            export DISPLAY=""
            # download and conver weights
@@ -39,7 +53,7 @@ jobs:
            # download sample video
            wget -O ./results/volleyball.mp4 https://d2v9y0dukr6mq2.cloudfront.net/video/preview/UnK3Qzg/crowds-of-poeple-hot-summer-day-at-wasaga-beach-ontario-canada-during-heatwave_n2t3d8trl__SB_PM.mp4
            # run sample detections
-            python ./scripts/detection.py -w ./model_data/tiny-yolo.h5 -a ./model_data/tiny-yolo_anchors.csv --model_image_size 416 416 -c ./model_data/coco_classes.txt -o ./results -i ./model_data/bike-car-dog.jpg -v ./results/volleyball.mp4
+            python ./scripts/detection.py -w ./model_data/tiny-yolo.h5 -a ./model_data/tiny-yolo_anchors.csv -c ./model_data/coco_classes.txt -o ./results -i ./model_data/bike-car-dog.jpg -v ./results/volleyball.mp4
            ls -l results/*
            cat ./results/bike-car-dog.csv

@@ -60,22 +74,12 @@ jobs:
            cat ./model_data/train_tiny-yolo_test.yaml
            python ./scripts/training.py --path_dataset ./model_data/VOC_2007_val.txt --path_weights ./model_data/tiny-yolo.h5 --path_anchors ./model_data/tiny-yolo_anchors.csv --path_classes ./model_data/voc_classes.txt --path_output ./model_data --path_config ./model_data/train_tiny-yolo_test.yaml
            # use the train model
-            python ./scripts/detection.py -w ./model_data/tiny-yolo_weights_full.h5 -a ./model_data/tiny-yolo_anchors.csv --model_image_size 416 416 -c ./model_data/voc_classes.txt -o ./results -i ./model_data/bike-car-dog.jpg
+            python ./scripts/detection.py -w ./model_data/tiny-yolo_weights_full.h5 -a ./model_data/tiny-yolo_anchors.csv -c ./model_data/voc_classes.txt -o ./results -i ./model_data/bike-car-dog.jpg
            ls -l results/*

-      - store_test_results:
-          path: test-reports
-      - store_artifacts:
-          path: test-reports
-
-  Py36:
-    docker:
-      - image: circleci/python:3.6
-    steps: *steps
-
 workflows:
   version: 2
   build:
     jobs:
-      - Py35
-      - Py36
+      - Py3-Tests
+      - Py3-Sample

keras_yolo3/model.py

Lines changed: 8 additions & 8 deletions
@@ -413,8 +413,8 @@ def create_model(input_shape, anchors, num_classes, weights_path=None, model_fac
         os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

     # K.clear_session() # get a new session
-    image_input = Input(shape=(None, None, 3))
-    h, w = input_shape
+    cnn_h, cnn_w = input_shape
+    image_input = Input(shape=(cnn_h, cnn_w, 3))
     num_anchors = len(anchors)

     model_body = _FACTOR_YOLO_BODY[model_factor](image_input, num_anchors // model_factor, num_classes)
@@ -437,8 +437,8 @@ def create_model(input_shape, anchors, num_classes, weights_path=None, model_fac

     model_loss_fn = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
                            arguments=_LOSS_ARGUMENTS)
-    y_true = [Input(shape=(h // {i: _INPUT_SHAPES[i] for i in range(model_factor)}[l],
-                           w // {i: _INPUT_SHAPES[i] for i in range(model_factor)}[l],
+    y_true = [Input(shape=(cnn_h // {i: _INPUT_SHAPES[i] for i in range(model_factor)}[l],
+                           cnn_w // {i: _INPUT_SHAPES[i] for i in range(model_factor)}[l],
                            num_anchors // model_factor,
                            num_classes + 5))
               for l in range(model_factor)]
@@ -465,12 +465,12 @@ def create_model_bottleneck(input_shape, anchors, num_classes, freeze_body=2,
                             weights_path=None, nb_gpu=1):
     """create the training model"""
     # K.clear_session() # get a new session
-    image_input = Input(shape=(None, None, 3))
-    h, w = input_shape
+    cnn_h, cnn_w = input_shape
+    image_input = Input(shape=(cnn_w, cnn_h, 3))
     num_anchors = len(anchors)

-    y_true = [Input(shape=(h // {0: 32, 1: 16, 2: 8}[l],
-                           w // {0: 32, 1: 16, 2: 8}[l],
+    y_true = [Input(shape=(cnn_h // {0: 32, 1: 16, 2: 8}[l],
+                           cnn_w // {0: 32, 1: 16, 2: 8}[l],
                            num_anchors // 3,
                            num_classes + 5))
               for l in range(3)]
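The change above swaps the size-agnostic `Input(shape=(None, None, 3))` for a fixed-size input and reuses the same height/width to size the `y_true` grids. A hedged sketch of that relationship (strides follow the `{0: 32, 1: 16, 2: 8}` mapping in the diff; the helper name is illustrative, not repo API):

```python
# Sketch of how a fixed input size propagates to the y_true placeholders.
# Assumes Keras 2.2.x; `make_yolo_inputs` is an illustrative helper, not repo API.
from keras.layers import Input


def make_yolo_inputs(input_shape, num_anchors, num_classes, strides=(32, 16, 8)):
    cnn_h, cnn_w = input_shape                    # e.g. (416, 416) for Tiny YOLO
    image_input = Input(shape=(cnn_h, cnn_w, 3))  # fixed size instead of (None, None, 3)
    y_true = [Input(shape=(cnn_h // s, cnn_w // s,
                           num_anchors // len(strides),
                           num_classes + 5))
              for s in strides]                   # one placeholder per output scale
    return image_input, y_true
```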

keras_yolo3/yolo.py

Lines changed: 23 additions & 21 deletions
@@ -49,7 +49,7 @@ class YOLO(object):
         "classes_path": os.path.join(update_path('model_data'), 'coco_classes.txt'),
         "score": 0.3,
         "iou": 0.45,
-        "model_image_size": (416, 416),
+        # "model_image_size": (416, 416),
         "nb_gpu": 1,
     }

@@ -78,35 +78,35 @@ def __init__(self, weights_path, anchors_path, classes_path, model_image_size=(N
         self.classes_path = update_path(classes_path)
         self.score = score
         self.iou = iou
-        self.model_image_size = model_image_size
+
         self.nb_gpu = nb_gpu
         if not self.nb_gpu:
             # disable all GPUs
             os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
+
         self.class_names = get_class_names(self.classes_path)
         self.anchors = get_anchors(self.anchors_path)
         self._open_session()
-        self.boxes, self.scores, self.classes = self._create_model()
+        self.boxes, self.scores, self.classes = self._create_model(model_image_size)
+
         self._generate_class_colors()

     def _open_session(self):
-        if K.backend() == 'tensorflow':
+        if K.backend().lower() == 'tensorflow':
             import tensorflow as tf
-            from keras.backend.tensorflow_backend import set_session
-
             config = tf.ConfigProto(allow_soft_placement=True,
                                     log_device_placement=False)
             config.gpu_options.force_gpu_compatible = True
-            # config.gpu_options.per_process_gpu_memory_fraction = 0.5
+            # config.gpu_options.per_process_gpu_memory_fraction = 0.3
             # Don't pre-allocate memory; allocate as-needed
             config.gpu_options.allow_growth = True
+            self.sess = tf.Session(config=config)
+            K.tensorflow_backend.set_session(self.sess)
+        else:
+            logging.warning('Using %s backend.', K.backend())
+            self.sess = K.get_session()

-            sess = tf.Session(config=config)
-            set_session(sess)
-
-        self.sess = K.get_session()
-
-    def _create_model(self):
+    def _create_model(self, model_image_size=(None, None)):
         # weights_path = update_path(self.weights_path)
         logging.debug('loading model from "%s"', self.weights_path)
         assert self.weights_path.endswith('.h5'), 'Keras model or weights must be a .h5 file.'
@@ -119,12 +119,12 @@ def _create_model(self):
         except Exception:
             is_tiny_version = (num_anchors == 6) # default setting
             logging.exception('Loading weights from "%s"', self.weights_path)
+            cnn_h, cnn_w = model_image_size
+            input = Input(shape=(cnn_h, cnn_w, 3))
             if is_tiny_version:
-                self.yolo_model = yolo_body_tiny(Input(shape=(None, None, 3)),
-                                                 num_anchors // 2, num_classes)
+                self.yolo_model = yolo_body_tiny(input, num_anchors // 2, num_classes)
             else:
-                self.yolo_model = yolo_body_full(Input(shape=(None, None, 3)),
-                                                 num_anchors // 3, num_classes)
+                self.yolo_model = yolo_body_full(input, num_anchors // 3, num_classes)
             # make sure model, anchors and classes match
             self.yolo_model.load_weights(self.weights_path, by_name=True, skip_mismatch=True)
         else:
@@ -164,11 +164,13 @@ def _generate_class_colors(self):

     def detect_image(self, image):
         start = time.time()
+        # this should be taken from the model
+        model_image_size = self.yolo_model._input_layers[0].input_shape[1:3]

-        if isinstance(self.model_image_size, (list, tuple, set)) and all(self.model_image_size):
-            assert self.model_image_size[0] % 32 == 0, 'Multiples of 32 required'
-            assert self.model_image_size[1] % 32 == 0, 'Multiples of 32 required'
-            boxed_image = letterbox_image(image, tuple(reversed(self.model_image_size)))
+        if all(model_image_size):
+            for size in model_image_size:
+                assert size % 32 == 0, 'Multiples of 32 required'
+            boxed_image = letterbox_image(image, tuple(reversed(model_image_size)))
         else:
             new_image_size = (image.width - (image.width % 32),
                               image.height - (image.height % 32))
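The two behavioural changes in this file are letting the TF 1.x session grow GPU memory on demand ("allow GPU grows") and reading the expected image size back from the loaded model instead of a constructor argument ("get size from Model"). A hedged sketch of both ideas outside the class (assumes TF 1.x with Keras 2.2.x and the same `_input_layers` access used in the diff):

```python
# Hedged sketch of "allow GPU grows" + "get size from Model" for TF 1.x / Keras 2.2.x.
import tensorflow as tf
from keras import backend as K


def open_growing_session():
    config = tf.ConfigProto(allow_soft_placement=True)
    config.gpu_options.allow_growth = True   # allocate GPU memory as needed, not all upfront
    sess = tf.Session(config=config)
    K.tensorflow_backend.set_session(sess)   # make it Keras' default session
    return sess


def model_image_size(model):
    # (height, width) the model expects; (None, None) for a size-agnostic model
    return model._input_layers[0].input_shape[1:3]
```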

model_data/train_yolo.yaml

Lines changed: 2 additions & 2 deletions
@@ -1,9 +1,9 @@
 image-size: [608, 608]
 batch-size:
   bottlenecks: 8
-  head: 8
+  head: 4
   # the unfreeze model takes more memory
-  full: 4
+  full: 2
 epochs:
   bottlenecks: 25
   head: 50

scripts/convert_weights.py

Lines changed: 5 additions & 3 deletions
@@ -263,12 +263,14 @@ def _main(config_path, weights_path, output_path, weights_only, plot_model):
     cfg_parser.read_file(unique_config_file)

     logging.info('Creating Keras model.')
-    input_layer = Input(shape=(None, None, 3))
+    cnn_w = int(cfg_parser['net_0']['width'])
+    cnn_h = int(cfg_parser['net_0']['height'])
+    input_layer = Input(shape=(cnn_h, cnn_w, 3))
     prev_layer = input_layer
     all_layers = []

-    weight_decay = float(cfg_parser['net_0']['decay']
-                         ) if 'net_0' in cfg_parser.sections() else 5e-4
+    weight_decay = float(cfg_parser['net_0']['decay']) \
+        if 'net_0' in cfg_parser.sections() else 5e-4
     count = 0
     out_index = []
     for section in tqdm.tqdm(cfg_parser.sections()):
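The converter now reads the network size straight from the Darknet config instead of building a size-agnostic input. A hedged sketch of that lookup (it assumes the section names were already de-duplicated to `net_0`, as the script does with its unique-config rewrite before this point; raw Darknet `.cfg` files repeat section names and would break plain `configparser`):

```python
# Sketch of reading width/height from a de-duplicated Darknet .cfg.
# The 'net_0' section name follows the converter's own de-duplication scheme
# and is an assumption outside that script.
import configparser
import io


def read_net_size(unique_cfg_text):
    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO(unique_cfg_text))
    cnn_w = int(parser['net_0']['width'])
    cnn_h = int(parser['net_0']['height'])
    return cnn_h, cnn_w   # Keras Input expects (height, width, channels)
```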

scripts/detect_interactive.py

Lines changed: 2 additions & 3 deletions
@@ -62,12 +62,11 @@ def loop_detect_stream(yolo, path_output=None):
         predict_video(yolo, vid_path, path_output, show_stream=True)


-def _main(path_weights, path_anchors, model_image_size, path_classes, nb_gpu,
+def _main(path_weights, path_anchors, path_classes, nb_gpu,
           path_output=None, images=False, videos=False, stream=False):
     assert any([images, videos, stream]), 'nothing to do...'

-    yolo = YOLO(path_weights, path_anchors, path_classes, model_image_size,
-                nb_gpu=nb_gpu)
+    yolo = YOLO(path_weights, path_anchors, path_classes, nb_gpu=nb_gpu)

     if images:
         # Image detection mode, disregard any remaining command line arguments
