A sample script to process images using pretrained models is [process_image.py](https://github.com/mlmed/torchxrayvision/blob/main/scripts/process_image.py)
Specify weights for pretrained models (currently all DenseNet121)
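Weights are selected by a string identifier. A minimal sketch of the naming scheme — the suffix list below is an assumption based on the project's docs, so verify it against `xrv.models` before relying on it:

```python
# Assumed weight identifiers follow "densenet121-res224-<source>";
# the source suffixes here are an assumption -- check xrv.models.
sources = ["all", "rsna", "nih", "pc", "chex", "mimic_nb", "mimic_ch"]
weight_names = [f"densenet121-res224-{s}" for s in sources]

# Loading then looks like this (requires torchxrayvision installed):
#   import torchxrayvision as xrv
#   model = xrv.models.DenseNet(weights=weight_names[0])
print(weight_names[0])  # densenet121-res224-all
```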
Note: Each pretrained model has 18 outputs. The `all` model has every output trained. However, for the other weights some targets are not trained and will predict randomly because they do not exist in the training dataset. The only valid outputs are listed in the field `{dataset}.pathologies` of the dataset that corresponds to the weights.
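One way to honor this is to mask predictions down to the trained targets. A pure-Python sketch — the pathology names, scores, and valid set below are made up for illustration; in practice the valid set comes from `{dataset}.pathologies`:

```python
# Hypothetical model outputs: one score per pathology name.
model_pathologies = ["Atelectasis", "Cardiomegaly", "Effusion", "Pneumonia"]
preds = [0.12, 0.85, 0.40, 0.07]

# Stand-in for `{dataset}.pathologies`: targets actually present in the
# training dataset for these weights.
valid = {"Cardiomegaly", "Pneumonia"}

# Keep only the trained outputs; the others predict randomly.
filtered = {p: v for p, v in zip(model_pathologies, preds) if p in valid}
print(filtered)  # {'Cardiomegaly': 0.85, 'Pneumonia': 0.07}
```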
[View docstrings for more detail on each dataset](https://github.com/mlmed/torchxrayvision/blob/main/torchxrayvision/datasets.py) and [Demo notebook](https://github.com/mlmed/torchxrayvision/blob/main/scripts/xray_datasets.ipynb) and [Example loading script](https://github.com/mlmed/torchxrayvision/blob/main/scripts/dataset_utils.py)
It also works with data augmentation if you pass in `data_aug=data_transforms` to the dataloader. The random seed is matched to align the calls for the image and the mask.
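The seed-matching idea can be sketched in plain Python. This is a conceptual stand-in for the actual transform pipeline; `apply_random_shift` is a made-up illustration, not a torchxrayvision function:

```python
import random

def apply_random_shift(pixels, seed):
    # Re-seeding before each call makes the "random" shift reproducible,
    # so the image and its mask receive the identical transform.
    rng = random.Random(seed)
    shift = rng.randint(-2, 2)
    return [p + shift for p in pixels]

image = [10, 20, 30]
mask = [0, 1, 1]
seed = 42

shifted_image = apply_random_shift(image, seed)
shifted_mask = apply_random_shift(mask, seed)

# Both calls drew the same shift because they shared the seed.
assert shifted_image[0] - image[0] == shifted_mask[0] - mask[0]
```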
## Distribution shift tools ([demo notebook](https://github.com/mlmed/torchxrayvision/blob/main/scripts/xray_datasets-CovariateShift.ipynb))
The class `xrv.datasets.CovariateDataset` takes two datasets and two
arrays representing the labels. The samples will be returned with the desired ratio of examples from each dataset.
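The idea of drawing from two sites at a controlled ratio can be sketched in plain Python. This is a conceptual illustration only, not the `CovariateDataset` implementation:

```python
import random

def mix_at_ratio(site_a, site_b, ratio, n, seed=0):
    """Draw n samples, taking each from site_a with probability `ratio`.

    Conceptual stand-in for sampling two datasets under a controlled
    covariate (site) distribution; not the torchxrayvision code.
    """
    rng = random.Random(seed)
    return [rng.choice(site_a) if rng.random() < ratio else rng.choice(site_b)
            for _ in range(n)]

site_a = [("a", 0), ("a", 1)]   # (site, label) pairs
site_b = [("b", 0), ("b", 1)]
samples = mix_at_ratio(site_a, site_b, ratio=0.8, n=1000)
frac_a = sum(1 for s, _ in samples if s == "a") / len(samples)
print(round(frac_a, 2))  # close to 0.8
```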
Medical Imaging with Deep Learning 2020 (Online: https://arxiv.org/abs/2002.0249
## Supporters/Sponsors
| <a href="https://cifar.ca/"><img width="300px" src="https://raw.githubusercontent.com/mlmed/torchxrayvision/main/docs/cifar-logo.png" /></a><br> CIFAR (Canadian Institute for Advanced Research) | <a href="https://mila.quebec/"><img width="300px" src="https://raw.githubusercontent.com/mlmed/torchxrayvision/main/docs/mila-logo.png" /></a><br> Mila, Quebec AI Institute, University of Montreal |
|:---:|:---:|
| <a href="http://aimi.stanford.edu/"><img width="300px" src="https://raw.githubusercontent.com/mlmed/torchxrayvision/main/docs/AIMI-stanford.jpg" /></a> <br><b>Stanford University's Center for <br>Artificial Intelligence in Medicine & Imaging</b> | <a href="http://www.carestream.com/"><img width="300px" src="https://raw.githubusercontent.com/mlmed/torchxrayvision/main/docs/carestream-logo.png" /></a> <br><b>Carestream Health</b> |
You can load pretrained anatomical segmentation models. `Demo Notebook <https://github.com/mlmed/torchxrayvision/blob/main/scripts/segmentation.ipynb>`_