
ValueError: Cannot create a tensor proto whose content is larger than 2GB. #28

@ByUnal

Hi there,

I was just trying to re-run the code with the given default parameters and dataset, but with a word2vec model. I'm using the word2vec-google-news-300 model, so the embedding dimension in my case is 300. The final command I ran:

python3 train_harnn.py --epochs 5 --batch-size 2 --embedding-dim 300 --embedding-type 0

But it throws the error below.

...
2024-04-19 11:49:46,589 - INFO - Loading data...
2024-04-19 11:49:46,589 - INFO - Data processing...
2024-04-19 11:49:46.734756: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2024-04-19 11:49:46.756399: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
Traceback (most recent call last):
  File "train_harnn.py", line 275, in <module>
    train_harnn()
  File "train_harnn.py", line 55, in train_harnn
    harnn = TextHARNN(
  File "/home/ybkaratas/Desktop/HieararchMC/Hierarchical-Multi-Label-Text-Classification/HARNN/text_harnn.py", line 165, in __init__
    self.embedding = tf.constant(pretrained_embedding, dtype=tf.float32, name="embedding")
  File "/home/ybkaratas/miniconda3/envs/cht3.8/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 162, in constant_v1
    return _constant_impl(value, dtype, shape, name, verify_shape=verify_shape,
  File "/home/ybkaratas/miniconda3/envs/cht3.8/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 277, in _constant_impl
    const_tensor = ops._create_graph_constant(  # pylint: disable=protected-access
  File "/home/ybkaratas/miniconda3/envs/cht3.8/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1008, in _create_graph_constant
    tensor_util.make_tensor_proto(
  File "/home/ybkaratas/miniconda3/envs/cht3.8/lib/python3.8/site-packages/tensorflow/python/framework/tensor_util.py", line 585, in make_tensor_proto
    raise ValueError(
ValueError: Cannot create a tensor proto whose content is larger than 2GB.

Initially I thought it was due to insufficient memory, but I got the same error even when I reduced both the batch size and the training-data size. What could be the reason? Can you help me?

My hardware: NVIDIA GeForce RTX 3060 (12 GB).
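
For reference, my current understanding is that this is not a GPU-memory issue: tf.constant(pretrained_embedding) in text_harnn.py serializes the entire embedding matrix into the GraphDef, and protobuf messages are capped at 2 GB. The google-news matrix is roughly 3,000,000 x 300 float32 (~3.6 GB), so it overflows that cap no matter how small the batch size or training set is. Below is a minimal sketch of the workaround I have seen suggested for this error: feed the matrix into a Variable through a placeholder at initialization time instead of baking it into the graph. The variable and placeholder names here are illustrative, not from the repo.

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# Dimensions of word2vec-google-news-300; ~3.6 GB of float32, which is
# what exceeds protobuf's 2 GB serialization limit when passed to
# tf.constant. The zeros array here is only a stand-in for the matrix
# the training script actually loads from the word2vec model.
vocab_size, embedding_dim = 3000000, 300
pretrained_embedding = np.zeros((vocab_size, embedding_dim), dtype=np.float32)

# Instead of tf.constant(pretrained_embedding), create a Variable and
# feed the data in once via a placeholder. Only the assign op goes into
# the GraphDef; the 3.6 GB of weights never get serialized into it.
embedding_ph = tf.placeholder(tf.float32, shape=[vocab_size, embedding_dim])
embedding = tf.Variable(
    tf.zeros([vocab_size, embedding_dim]), trainable=False, name="embedding")
embedding_init = embedding.assign(embedding_ph)

with tf.Session() as sess:
    # Running the assign both initializes the variable and loads the weights.
    sess.run(embedding_init, feed_dict={embedding_ph: pretrained_embedding})

Would replacing the tf.constant call in text_harnn.py (line 165) with something along these lines be the right fix here?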
