This repository has been archived by the owner on Sep 3, 2022. It is now read-only.
I am exploring the following TensorFlow example: https://github.com/googledatalab/notebooks/blob/master/samples/TensorFlow/LSTM%20Punctuation%20Model%20With%20TensorFlow.ipynb which is apparently written in TF v1, so I upgraded it with the v2 upgrade script, which reported three main inconsistencies:

ERROR: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib. tf.contrib.rnn.DropoutWrapper cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
ERROR: Using member tf.contrib.legacy_seq2seq.sequence_loss_by_example in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.sequence_loss_by_example cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
ERROR: Using member tf.contrib.framework.get_or_create_global_step in deprecated module tf.contrib. tf.contrib.framework.get_or_create_global_step cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.

For compatibility I manually replaced framework.get_or_create_global_step with tf.compat.v1.train.get_or_create_global_step, and rnn.DropoutWrapper with tf.compat.v1.nn.rnn_cell.DropoutWrapper.

But I was unable to find a solution for the tf.contrib.legacy_seq2seq.sequence_loss_by_example method, since I cannot find a backwards-compatible alternative. I tried installing TensorFlow Addons and using its seq2seq loss function, but I wasn't able to figure out how to adapt it to work with the rest of the code. I stumbled across errors like "Consider casting elements to a supported type." and "Logits must be a [batch_size x sequence_length x logits] tensor", probably because I am not implementing something correctly.

My question: how can I implement a supported TensorFlow v2 alternative to this loss function, so it acts similarly to the code below?
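For reference, the manual compat-mode replacements mentioned above can be sketched as follows (the cell size and keep probability here are illustrative, not the notebook's actual values):

```python
import tensorflow as tf

# The notebook builds a v1-style graph, so disable eager execution first.
tf.compat.v1.disable_eager_execution()

# rnn.DropoutWrapper -> tf.compat.v1.nn.rnn_cell.DropoutWrapper
cell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(128)  # illustrative size
cell = tf.compat.v1.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=0.5)

# framework.get_or_create_global_step -> tf.compat.v1.train.get_or_create_global_step
global_step = tf.compat.v1.train.get_or_create_global_step()
```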
Full code here.
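For what it's worth, the legacy sequence_loss_by_example is mostly cross-entropy bookkeeping, so one possible TF2-compatible re-implementation (a sketch, assuming the legacy calling convention of per-timestep lists and the default average_across_timesteps=True behavior) would be:

```python
import tensorflow as tf

def sequence_loss_by_example(logits, targets, weights):
    """Sketch of a TF2 replacement for tf.contrib.legacy_seq2seq.sequence_loss_by_example.

    logits:  list of [batch, vocab] float tensors, one per timestep
    targets: list of [batch] int tensors, one per timestep
    weights: list of [batch] float tensors, one per timestep
    Returns a [batch] tensor of weight-averaged log-perplexities.
    """
    log_perp_list = []
    for logit, target, weight in zip(logits, targets, weights):
        crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=target, logits=logit)
        log_perp_list.append(crossent * weight)
    log_perps = tf.add_n(log_perp_list)
    total_size = tf.add_n(weights) + 1e-12  # avoid division by zero
    return log_perps / total_size
```

If you go the TensorFlow Addons route instead, note that tfa.seq2seq.sequence_loss expects dense tensors rather than per-timestep lists: logits of shape [batch, seq_len, vocab] and targets/weights of shape [batch, seq_len]. Stacking the lists first with tf.stack(..., axis=1) may be what the "Logits must be a [batch_size x sequence_length x logits] tensor" error is asking for.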
My proposal: when this is resolved, please update the notebook with a newer-version example.