Python: DeprecationWarning: elementwise == comparison failed; this will raise an error in the future

While working through the deep learning assignment in the Udacity course, I ran into a problem when comparing my model's predictions against the training set labels:

from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range

pickle_file = 'notMNIST.pickle'

with open(pickle_file, 'rb') as f:
  save = pickle.load(f)
  train_dataset = save['train_dataset']
  train_labels = save['train_labels']
  valid_dataset = save['valid_dataset']
  valid_labels = save['valid_labels']
  test_dataset = save['test_dataset']
  test_labels = save['test_labels']
  del save  # hint to help gc free up memory
  print('Training set', train_dataset.shape, train_labels.shape)
  print('Validation set', valid_dataset.shape, valid_labels.shape)
  print('Test set', test_dataset.shape, test_labels.shape)

This prints:

Training set (200000, 28, 28) (200000,)
Validation set (10000, 28, 28) (10000,)
Test set (10000, 28, 28) (10000,)

# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
image_size = 28   # defined in an earlier cell of the notebook
num_labels = 10   # notMNIST has ten classes, 'A' through 'J'

graph = tf.Graph()
with graph.as_default():

  # Input data.
  # Load the training, validation and test data into constants that are
  # attached to the graph.
  tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
  tf_train_labels = tf.constant(train_labels[:train_subset])
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  # Variables.
  # These are the parameters that we are going to be training. The weight
  # matrix will be initialized using random values following a (truncated)
  # normal distribution. The biases get initialized to zero.
  weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_labels]))
  biases = tf.Variable(tf.zeros([num_labels]))

  # Training computation.
  # We multiply the inputs with the weight matrix, and add biases. We compute
  # the softmax and cross-entropy (it's one operation in TensorFlow, because
  # it's very common, and it can be optimized). We take the average of this
  # cross-entropy across all training examples: that's our loss.
  logits = tf.matmul(tf_train_dataset, weights) + biases
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))

  # Optimizer.
  # We are going to find the minimum of this loss using gradient descent.
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  # Predictions for the training, validation, and test data.
  # These are not part of training, but merely here so that we can report
  # accuracy figures as we train.
  train_prediction = tf.nn.softmax(logits)
  valid_prediction = tf.nn.softmax(
    tf.matmul(tf_valid_dataset, weights) + biases)
  test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)


num_steps = 801

def accuracy(predictions, labels):
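    # Both arguments are expected to be 2-D arrays of shape
    # (n_samples, num_labels): softmax scores and one-hot labels.
    # argmax over axis 1 then yields one class index per row.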
    return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
          / predictions.shape[0])


with tf.Session(graph=graph) as session:
  # This is a one-time operation which ensures the parameters get initialized as
  # we described in the graph: random weights for the matrix, zeros for the
  # biases. 
  tf.global_variables_initializer().run()
  print('Initialized')
  for step in range(num_steps):
    # Run the computations. We tell .run() that we want to run the optimizer,
    # and get the loss value and the training predictions returned as numpy
    # arrays.
    _, l, predictions = session.run([optimizer, loss, train_prediction])
    if (step % 100 == 0):
      print('Loss at step %d: %f' % (step, l))
      print('Training accuracy: %.1f%%' % accuracy(
        predictions, train_labels[:train_subset, :]))
      # Calling .eval() on valid_prediction is basically like calling run(), but
      # just to get that one numpy array. Note that it recomputes all its graph
      # dependencies.
      print('Validation accuracy: %.1f%%' % accuracy(
        valid_prediction.eval(), valid_labels))
  print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))

it prints:

C:\Users\Arslan\Anaconda3\lib\site-packages\ipykernel_launcher.py:5: DeprecationWarning: elementwise == comparison failed; this will raise an error in the future.

The accuracy comes out as 0% for every dataset. I suspect the arrays cannot be compared with '=='.
Any help would be appreciated.

asked by Arslan Thobani (score 3)

I'm going to guess that the error occurs in this expression:

np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))

Can you tell us something about the two arrays, predictions and labels? The usual stuff: dtype, shape, some sample values. Maybe go one step further and show the np.argmax(...) of each, along the lines of the sketch below.
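
A minimal way to gather that information (a sketch; the arrays here are hypothetical stand-ins for whatever accuracy() actually receives):

import numpy as np

predictions = np.random.rand(4, 10)   # stand-in for a softmax output
labels = np.array([3, 1, 0, 9])       # stand-in: 1-D class indices

for name, arr in [('predictions', predictions), ('labels', labels)]:
    print(name, arr.dtype, arr.shape)
print(np.argmax(predictions, 1))      # fine: one class index per row
# np.argmax(labels, 1) raises an axis error on a 1-D array, which is a
# strong hint that the labels were never one-hot encoded.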

In numpy you can compare arrays of the same size, but it has become pickier about comparing arrays whose sizes don't match:

In [522]: np.arange(10)==np.arange(5,15)
Out[522]: array([False, False, False, False, False, False, False, False, False, False], dtype=bool)
In [523]: np.arange(10)==np.arange(5,14)
/usr/local/bin/ipython3:1: DeprecationWarning: elementwise == comparison failed; this will raise an error in the future.
  #!/usr/bin/python3
Out[523]: False
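
Note what that failure does to the accuracy formula: the comparison evaluates to the scalar False, np.sum(False) is 0, and so the reported accuracy is 0.0% no matter what the model predicts. Here is a minimal sketch of both the symptom and a plausible fix, assuming (as the printed shapes in the question suggest) that the labels are 1-D integer class indices rather than one-hot rows:

import numpy as np

pred_classes = np.array([1, 0, 2, 1])   # e.g. np.argmax(predictions, 1)
true_classes = np.array([1, 0, 2])      # mismatched length
try:
    # older numpy: DeprecationWarning, scalar False, so the sum is 0
    print(np.sum(pred_classes == true_classes))   # 0
except ValueError as e:
    # newer numpy raises instead of returning False
    print('comparison raised:', e)

# One-hot encoding 1-D labels (the course's reformat cell does roughly
# this) makes np.argmax(labels, 1) inside accuracy() well-defined:
labels_1d = np.array([3, 0, 9])
num_labels = 10
one_hot = (np.arange(num_labels) == labels_1d[:, None]).astype(np.float32)
print(one_hot.shape)          # (3, 10)
print(np.argmax(one_hot, 1))  # [3 0 9]

Once accuracy() is fed one-hot labels of shape (n, num_labels) that match the predictions row for row, the warning disappears and the percentage becomes meaningful.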
answered by hpaulj (score 6)

I solved this problem by upgrading Python to 3.6.4 (the latest at the time):

conda update python
(score 1)