Description
I'm running https://github.com/Azure/MachineLearningNotebooks/tree/824d844cd7386d95edfa6ecec1642e799ca79dd7/how-to-use-azureml/ml-frameworks/using-mlflow/train-and-deploy-keras-auto-logging on a default compute instance with the "Python 3.8 AzureML" kernel. I'm using the 3.8 kernel because of #1421.
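For context, this is the kind of diagnostic cell I would run in that kernel to check the framework versions (hypothetical, not part of the notebook); I suspect a mismatch between the standalone keras package and the installed tensorflow, which matches the traceback below:

```python
# Hypothetical diagnostic cell: print the versions of the standalone keras
# package and of tensorflow that the "Python 3.8 AzureML" kernel ships.
import keras
import tensorflow as tf

print("keras:", keras.__version__)
print("tensorflow:", tf.__version__)
```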
The `run = train.driver()` cell fails with:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-9-1f80e70d81d2> in <module>
----> 1 run = train.driver()
/mnt/batch/tasks/shared/LS_root/mounts/clusters/ci142270/code/how-to-use-azureml/ml-frameworks/using-mlflow/train-and-deploy-keras-auto-logging/scripts/train.py in driver()
52 model = Sequential()
53 # first hidden layer
---> 54 model.add(Dense(n_h1, activation='relu', input_shape=(n_inputs,)))
55 # second hidden layer
56 model.add(Dense(n_h2, activation='relu'))
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/keras/engine/sequential.py in add(self, layer)
164 # and create the node connecting the current layer
165 # to the input layer we just created.
--> 166 layer(x)
167 set_inputs = True
168 else:
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/keras/backend/tensorflow_backend.py in symbolic_fn_wrapper(*args, **kwargs)
73 if _SYMBOLIC_SCOPE.value:
74 with get_graph().as_default():
---> 75 return func(*args, **kwargs)
76 else:
77 return func(*args, **kwargs)
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
444 # Raise exceptions in case the input is not compatible
445 # with the input_spec specified in the layer constructor.
--> 446 self.assert_input_compatibility(inputs)
447
448 # Collect input shapes to build layer.
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/keras/engine/base_layer.py in assert_input_compatibility(self, inputs)
308 for x in inputs:
309 try:
--> 310 K.is_keras_tensor(x)
311 except ValueError:
312 raise ValueError('Layer ' + self.name + ' was called with '
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/keras/backend/tensorflow_backend.py in is_keras_tensor(x)
693 ```
694 """
--> 695 if not is_tensor(x):
696 raise ValueError('Unexpectedly found an instance of type `' +
697 str(type(x)) + '`. '
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/keras/backend/tensorflow_backend.py in is_tensor(x)
701
702 def is_tensor(x):
--> 703 return isinstance(x, tf_ops._TensorLike) or tf_ops.is_dense_tensor_like(x)
704
705
AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike'
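The error looks like an incompatibility between the standalone keras package and the TensorFlow release installed in the azureml_py38 environment: `tensorflow.python.framework.ops._TensorLike` was removed in newer TensorFlow versions, while the standalone Keras backend still references it. As a workaround sketch (my assumption, not the repo's intended fix), the model definition in scripts/train.py could be ported to `tensorflow.keras`, which keeps the layers and the backend in sync. The layer sizes below are placeholders rather than the values from train.py:

```python
# Minimal sketch of the failing model definition rewritten with tensorflow.keras
# instead of standalone keras, which avoids the is_tensor/_TensorLike backend check.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Placeholder sizes, not the actual values used in scripts/train.py.
n_inputs, n_h1, n_h2, n_outputs = 28 * 28, 100, 100, 10

model = Sequential()
model.add(Dense(n_h1, activation='relu', input_shape=(n_inputs,)))  # first hidden layer
model.add(Dense(n_h2, activation='relu'))                           # second hidden layer
model.add(Dense(n_outputs, activation='softmax'))                   # output layer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
```

Pinning keras and tensorflow to mutually compatible versions in the kernel environment would presumably also work, but tf.keras should be more robust across image updates.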
Further down, the notebook assumes the existence of a "gpu-cluster" compute target but points to a nonexistent "configuration.ipynb" for creating it. I think it would be better to keep the notebooks self-contained, e.g. by provisioning the cluster inline as sketched below.
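Something like the following would remove the dependency on configuration.ipynb (vm_size and max_nodes here are my placeholder choices, not values from the sample):

```python
# Hypothetical inline provisioning of the "gpu-cluster" target with the AzureML SDK,
# so the notebook does not rely on a separate configuration.ipynb.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

ws = Workspace.from_config()
cluster_name = "gpu-cluster"

try:
    # Reuse the cluster if it already exists in the workspace.
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print("Found existing compute target:", cluster_name)
except ComputeTargetException:
    # Otherwise create it; vm_size and max_nodes are placeholders.
    config = AmlCompute.provisioning_configuration(vm_size="Standard_NC6", max_nodes=4)
    compute_target = ComputeTarget.create(ws, cluster_name, config)
    compute_target.wait_for_completion(show_output=True)
```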
The notebook also says "If you are using a Notebook VM, you are all set", but that is not the case here.
Are there plans to have the notebooks from https://github.com/Azure/MachineLearningNotebooks tested automatically against Azure compute instance rollouts, in order to improve their quality?