
Tensorflow OOM on GPU

I am training some music data on an LSTM-RNN in TensorFlow and ran into a problem with GPU memory allocation that I don't understand: I hit an OOM even though there seems to be just enough VRAM still available. Some background: I am working on Ubuntu Gnome 16.04 with a GTX 1060 6GB, an Intel Xeon E3-1231v3 and 8 GB of RAM. First, the part of the error message that I can understand; I will add the whole error message again at the end for anyone who asks for it in order to help:

I tensorflow/core/common_runtime/bfc_allocator.cc:696] 8 Chunks of size 256 totalling 2.0KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 1280 totalling 1.2KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 5 Chunks of size 44288 totalling 216.2KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 5 Chunks of size 56064 totalling 273.8KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 4 Chunks of size 154350080 totalling 588.80MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 3 Chunks of size 813400064 totalling 2.27GiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 1612612352 totalling 1.50GiB
I tensorflow/core/common_runtime/bfc_allocator.cc:700] Sum Total of in-use chunks: 4.35GiB
I tensorflow/core/common_runtime/bfc_allocator.cc:702] Stats: 
Limit:                  5484118016
InUse:                  4670717952
MaxInUse:               5484118016
NumAllocs:                      29
MaxAllocSize:           1612612352

W tensorflow/core/common_runtime/bfc_allocator.cc:274] *********************___________*__***************************************************xxxxxxxxxxxxxx
W tensorflow/core/common_runtime/bfc_allocator.cc:275] Ran out of memory trying to allocate 775.72MiB.  See logs for memory state.
W tensorflow/core/framework/op_kernel.cc:993] Resource exhausted: OOM when allocating tensor with shape[14525,14000]

So I can read that a maximum of 5484118016 bytes can be allocated, 4670717952 bytes are already in use, and another 775.72 MB = 775720000 bytes are to be allocated. 5484118016 bytes - 4670717952 bytes - 775720000 bytes = 37680064 bytes according to my calculator. So there should still be about 37 MB of free VRAM left even after the space for the new tensor has been allocated. This also seems quite plausible to me, since TensorFlow presumably (I would think?) would not try to allocate more VRAM than is still available and would instead keep the rest of the data in RAM or somewhere else.
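As a sanity check, the 775.72 MiB from the warning matches a float32 tensor of shape [14525, 14000], which is the shape reported in the OOM message; a minimal calculation:

rows, cols, bytes_per_float = 14525, 14000, 4  # DT_FLOAT = 4 bytes per element
size_bytes = rows * cols * bytes_per_float
print(size_bytes)            # 813400000
print(size_bytes / 2 ** 20)  # ~775.72 (MiB), the figure in the warning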

Now I think there is just one big mistake in my reasoning, but I would be very grateful if someone could explain to me what that mistake is. The obvious workaround for my problem is to make my batches a bit smaller; at around 1.5 GB each they are probably simply too big. Still, I would like to know what the actual problem is.

edit: I found something I should try:

config = tf.ConfigProto()
config.gpu_options.allocator_type = 'BFC'
with tf.Session(config = config) as s:

which still doesn't work, but since the TensorFlow documentation has no explanation of what

 gpu_options.allocator_type = 'BFC'

actually does, I would like to ask you guys.
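For comparison, the same gpu_options object exposes two memory-related settings that are often suggested for OOM issues; a minimal sketch (the 0.8 fraction is just an example value):

import tensorflow as tf

config = tf.ConfigProto()
# Allocate GPU memory on demand instead of grabbing most of it up front.
config.gpu_options.allow_growth = True
# Or cap the fraction of total GPU memory this process may use:
# config.gpu_options.per_process_gpu_memory_fraction = 0.8

with tf.Session(config=config) as s:
    pass  # build and run the graph here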

Adding the rest of the error message for anyone interested:

Sorry for the long copy/paste, but maybe someone will want or need to see it.

Thanks in advance, Leon

(gputensorflow) [email protected]:~/Tensorflow$ python Netzwerk_v0.5.1_gamma.py 
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:910] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GTX 1060 6GB
major: 6 minor: 1 memoryClockRate (GHz) 1.7335
pciBusID 0000:01:00.0
Total memory: 5.93GiB
Free memory: 5.40GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (256):   Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (512):   Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (1024):  Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (2048):  Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (4096):  Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (8192):  Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (16384):     Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (32768):     Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (65536):     Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (131072):    Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (262144):    Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (524288):    Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (1048576):   Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (2097152):   Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (4194304):   Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (8388608):   Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (16777216):  Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (33554432):  Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (67108864):  Total Chunks: 0, Chunks in use: 0 0B allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (134217728):     Total Chunks: 1, Chunks in use: 0 147.20MiB allocated for chunks. 147.20MiB client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:643] Bin (268435456):     Total Chunks: 1, Chunks in use: 0 628.52MiB allocated for chunks. 0B client-requested for chunks. 0B in use in bin. 0B client-requested in use in bin.
I tensorflow/core/common_runtime/bfc_allocator.cc:660] Bin for 775.72MiB was 256.00MiB, Chunk State: 
I tensorflow/core/common_runtime/bfc_allocator.cc:666]   Size: 628.52MiB | Requested Size: 0B | in_use: 0, prev:   Size: 147.20MiB | Requested Size: 147.20MiB | in_use: 1, next:   Size: 54.8KiB | Requested Size: 54.7KiB | in_use: 1
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x10208000000 of size 1280
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x10208000500 of size 256
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x10208000600 of size 56064
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x1020800e100 of size 256
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x1020800e200 of size 44288
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x10208018f00 of size 256
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x10208019000 of size 256
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x10208019100 of size 813400064
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102387d1100 of size 56064
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102387dec00 of size 154350080
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x10241b11e00 of size 44288
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x10241b1cb00 of size 256
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x10241b1cc00 of size 256
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x10241b1cd00 of size 154350080
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102722d4d00 of size 56064
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x1027b615a00 of size 44288
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x1027b620700 of size 256
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x1027b620800 of size 256
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x1027b620900 of size 813400064
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102abdd8900 of size 813400064
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102dc590900 of size 56064
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102dc59e400 of size 56064
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102dc5abf00 of size 154350080
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102e58df100 of size 154350080
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102eec12300 of size 44288
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102eec1d000 of size 44288
I tensorflow/core/common_runtime/bfc_allocator.cc:678] Chunk at 0x102eec27d00 of size 1612612352
I tensorflow/core/common_runtime/bfc_allocator.cc:687] Free at 0x1024ae4ff00 of size 659049984
I tensorflow/core/common_runtime/bfc_allocator.cc:687] Free at 0x102722e2800 of size 154350080
I tensorflow/core/common_runtime/bfc_allocator.cc:693]      Summary of in-use Chunks by size: 
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 8 Chunks of size 256 totalling 2.0KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 1280 totalling 1.2KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 5 Chunks of size 44288 totalling 216.2KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 5 Chunks of size 56064 totalling 273.8KiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 4 Chunks of size 154350080 totalling 588.80MiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 3 Chunks of size 813400064 totalling 2.27GiB
I tensorflow/core/common_runtime/bfc_allocator.cc:696] 1 Chunks of size 1612612352 totalling 1.50GiB
I tensorflow/core/common_runtime/bfc_allocator.cc:700] Sum Total of in-use chunks: 4.35GiB
I tensorflow/core/common_runtime/bfc_allocator.cc:702] Stats: 
Limit:                  5484118016
InUse:                  4670717952
MaxInUse:               5484118016
NumAllocs:                      29
MaxAllocSize:           1612612352

W tensorflow/core/common_runtime/bfc_allocator.cc:274] *********************___________*__***************************************************xxxxxxxxxxxxxx
W tensorflow/core/common_runtime/bfc_allocator.cc:275] Ran out of memory trying to allocate 775.72MiB.  See logs for memory state.
W tensorflow/core/framework/op_kernel.cc:993] Resource exhausted: OOM when allocating tensor with shape[14525,14000]
Traceback (most recent call last):
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1022, in _do_call
    return fn(*args)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1004, in _run_fn
    status, run_metadata)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/contextlib.py", line 66, in __exit__
    next(self.gen)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[14525,14000]
     [[Node: rnn/basic_lstm_cell/weights/Initializer/random_uniform = Add[T=DT_FLOAT, _class=["loc:@rnn/basic_lstm_cell/weights"], _device="/job:localhost/replica:0/task:0/gpu:0"](rnn/basic_lstm_cell/weights/Initializer/random_uniform/mul, rnn/basic_lstm_cell/weights/Initializer/random_uniform/min)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "Netzwerk_v0.5.1_gamma.py", line 171, in <module>
    session.run(tf.global_variables_initializer())
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 965, in _run
    feed_dict_string, options, run_metadata)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run
    target_list, options, run_metadata)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[14525,14000]
     [[Node: rnn/basic_lstm_cell/weights/Initializer/random_uniform = Add[T=DT_FLOAT, _class=["loc:@rnn/basic_lstm_cell/weights"], _device="/job:localhost/replica:0/task:0/gpu:0"](rnn/basic_lstm_cell/weights/Initializer/random_uniform/mul, rnn/basic_lstm_cell/weights/Initializer/random_uniform/min)]]

Caused by op 'rnn/basic_lstm_cell/weights/Initializer/random_uniform', defined at:
  File "Netzwerk_v0.5.1_gamma.py", line 94, in <module>
    initial_state=initial_state, time_major=False)       # time_major = FALSE currently
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 545, in dynamic_rnn
    dtype=dtype)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 712, in _dynamic_rnn_loop
    swap_memory=swap_memory)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2626, in while_loop
    result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2459, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2409, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 697, in _time_step
    (output, new_state) = call_cell()
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/rnn.py", line 683, in <lambda>
    call_cell = lambda: cell(input_t, state)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 179, in __call__
    concat = _linear([inputs, h], 4 * self._num_units, True, scope=scope)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/contrib/rnn/python/ops/core_rnn_cell_impl.py", line 747, in _linear
    "weights", [total_arg_size, output_size], dtype=dtype)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py", line 988, in get_variable
    custom_getter=custom_getter)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py", line 890, in get_variable
    custom_getter=custom_getter)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py", line 348, in get_variable
    validate_shape=validate_shape)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py", line 333, in _true_getter
    caching_device=caching_device, validate_shape=validate_shape)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py", line 684, in _get_single_variable
    validate_shape=validate_shape)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/variables.py", line 226, in __init__
    expected_shape=expected_shape)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/variables.py", line 303, in _init_from_args
    initial_value(), name="initial_value", dtype=dtype)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/variable_scope.py", line 673, in <lambda>
    shape.as_list(), dtype=dtype, partition_info=partition_info)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/init_ops.py", line 360, in __call__
    dtype, seed=self.seed)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/random_ops.py", line 246, in random_uniform
    return math_ops.add(rnd * (maxval - minval), minval, name=name)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gen_math_ops.py", line 73, in add
    result = _op_def_lib.apply_op("Add", x=x, y=y, name=name)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/leon/anaconda3/envs/gputensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1264, in __init__
    self._traceback = _extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[14525,14000]
     [[Node: rnn/basic_lstm_cell/weights/Initializer/random_uniform = Add[T=DT_FLOAT, _class=["loc:@rnn/basic_lstm_cell/weights"], _device="/job:localhost/replica:0/task:0/gpu:0"](rnn/basic_lstm_cell/weights/Initializer/random_uniform/mul, rnn/basic_lstm_cell/weights/Initializer/random_uniform/min)]]
12
LJKS

Try having a look at this:

Be careful not to run the evaluation and training binaries on the same GPU, or else you might run out of memory. Consider running the evaluation on a separate GPU if available, or suspending the training binary while running the evaluation on the same GPU.

https://www.tensorflow.org/tutorials/deep_cnn
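One way to keep the training and evaluation processes on separate GPUs is to restrict which devices each process can see before TensorFlow is imported; a minimal sketch, assuming a machine with at least two GPUs:

import os

# In the evaluation script: expose only GPU 1 to this process, so the
# training process can keep GPU 0 (and its memory) to itself.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf  # import after setting the variable

# Inside this process the single visible GPU is addressed as /gpu:0.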

3
jeck yung

I fixed this problem by reducing the batch size, for example to batch_size=52. Reducing batch_size is the way to bring the memory requirements down.

Which batch_size you can use depends on your GPU, the size of its VRAM, the cache memory, and so on.

Please also refer to this other Stack Overflow link.
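A minimal sketch of what reducing the batch size looks like in a plain feed_dict training loop (the tiny model and the random data are only placeholders):

import numpy as np
import tensorflow as tf

batch_size = 52  # smaller batches mean smaller tensors on the GPU
data = np.random.rand(1000, 14000).astype(np.float32)  # dummy stand-in data

x = tf.placeholder(tf.float32, [None, 14000])
w = tf.Variable(tf.zeros([14000, 1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Feed the data in slices of batch_size instead of all at once.
    for start in range(0, len(data), batch_size):
        sess.run(train_op, feed_dict={x: data[start:start + batch_size]})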

1
susan097

I ran into the same problem. I closed all Anaconda Prompt windows and killed all Python tasks, then opened a new Anaconda Prompt window and ran train.py again. The next time it worked for me. The Anaconda and Python terminals were holding on to memory, leaving no room for the training process.

Also try reducing the batch size of the training process if the approach above does not work.

Hope this helps.

1
Sriram Veturi

I recently had a very similar error, which was caused by accidentally having a training process running in the background while trying to train in another process. Stopping it fixed the error immediately.

0
Ranga

When I run into an OOM on the GPU, I believe changing the batch size is the right option to try first.

For a different GPU you may need a different batch size, depending on how much GPU memory you have.

Recently I faced a similar problem and tweaked a lot of things to run the different experiments.

Here is the link to the question (some tricks are included as well).

However, reducing the batch size may slow down training. If you have multiple GPUs, you can use them (a short sketch follows below). To check your GPU, you can run the following in a terminal:

nvidia-smi

It shows you the necessary information about your GPU setup.
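If more than one GPU is available, parts of the graph can also be pinned to specific devices; a minimal sketch, assuming a second GPU exists (the device strings are assumptions, check nvidia-smi for what is actually installed):

import tensorflow as tf

# Place one operand on each GPU.
with tf.device('/gpu:0'):
    a = tf.random_uniform([1000, 1000])
with tf.device('/gpu:1'):
    b = tf.random_uniform([1000, 1000])

total = tf.reduce_sum(tf.matmul(a, a)) + tf.reduce_sum(tf.matmul(b, b))

# allow_soft_placement lets TensorFlow fall back to another device
# if /gpu:1 does not exist on this machine.
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(total))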

0
Maruf

I had the same OOM problem when running model permutations one after another. After finishing one model, then defining and running a new one, the GPU memory does NOT seem to be fully released from the previous models; something builds up in memory and eventually leads to an OOM error.

Answer from g-eoj to another issue:

keras.backend.clear_session()

should clear the previous model. From https://keras.io/backend/: "Destroys the current TF graph and creates a new one. Useful to avoid clutter from old models/layers." After running and saving one model, clear the session, then run the next model.
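A minimal sketch of that pattern (the model, data, and list of permutations are only placeholders):

import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

x = np.random.rand(200, 100)
y = np.random.rand(200, 1)

for units in [64, 128, 256]:  # stand-in for the model permutations
    model = Sequential([
        Dense(units, activation='relu', input_shape=(100,)),
        Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)
    model.save('model_%d.h5' % units)
    K.clear_session()  # drop the old graph before building the next model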

0
MarkD