A brief introduction to converting between TensorFlow and PyTorch (mainly TensorFlow to PyTorch)

Using a short piece of code as an example, this article briefly introduces how to convert between TensorFlow and PyTorch (mainly TensorFlow to PyTorch). The introduction is not exhaustive and is for reference only.

I am familiar with PyTorch but know little about TensorFlow; since the code I encounter is often written in TensorFlow and I prefer to work in PyTorch, I put together this brief TensorFlow-to-PyTorch guide. There may well be mistakes, so please go easy on me~

Table of Contents

    • 1. Predefined variables
    • 2. Creating and initializing variables
    • 3. Statement execution
    • 4. Tensors
    • 5. Other functions

1. Predefined variables

In TensorFlow, variable definition and initialization are separated. Variables are typically predefined at the start of a program, declaring their data type, shape, and so on, and are only assigned concrete values at execution time. In PyTorch, by contrast, a variable is defined where it is used: definition and initialization happen in a single step.
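The contrast can be sketched as follows. The TensorFlow 1.x placeholder lines are shown as comments for comparison only; the variable names and shapes are illustrative, not taken from the original code.

```python
import numpy as np
import torch

# TensorFlow 1.x style (for comparison, not executed here):
#   X = tf.placeholder(tf.float32, shape=[None, 4])    # declare type/shape first
#   ...later: sess.run(..., feed_dict={X: data})       # supply values at run time

# PyTorch style: the tensor is created together with its values, in one step.
data = np.ones((2, 4), dtype=np.float32)
X = torch.tensor(data)   # definition and initialization happen together
print(X.shape)           # torch.Size([2, 4])
```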

2. Creating and initializing variables

TensorFlow creates and initializes variables with tf.Variable, while PyTorch uses torch.tensor (or one of the factory functions such as torch.zeros).
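A minimal sketch of the two styles; the TensorFlow line is a comment for comparison, and the variable name W is illustrative. Note that a trainable PyTorch tensor needs requires_grad=True to play the role of a tf.Variable.

```python
import torch

# TensorFlow style (comparison only):
#   W = tf.Variable(tf.zeros([3, 3]))

# PyTorch: the tensor is created and initialized in one call;
# requires_grad=True marks it as trainable (analogous to tf.Variable).
W = torch.zeros(3, 3, requires_grad=True)
```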

3. Statement execution

In TensorFlow, because variable definition and initialization are separated, all assignments and computations on the graph must be executed through the run method of a tf.Session, for example:

sess.run([G_solver, G_loss_temp, MSE_loss],
             feed_dict = {X: X_mb, M: M_mb, H: H_mb})

PyTorch needs no such run step: computations execute eagerly, immediately after assignment.
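For example, a mean-squared-error value can be computed directly, with no session or feed_dict; the tensors below are made up for illustration.

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([4.0, 5.0, 6.0])

# Evaluated eagerly, line by line -- no Session.run needed.
loss = ((x - y) ** 2).mean()
print(loss.item())  # 9.0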

4. Tensors

During a PyTorch run, the NumPy arrays involved must be converted to tensors, as follows:

if use_gpu:
    X_mb = torch.tensor(X_mb, device="cuda")
    M_mb = torch.tensor(M_mb, device="cuda")
    H_mb = torch.tensor(H_mb, device="cuda")
else:
    X_mb = torch.tensor(X_mb)
    M_mb = torch.tensor(M_mb)
    H_mb = torch.tensor(H_mb)

At the end of the run, the tensors need to be converted back to NumPy arrays:

if use_gpu:
    imputed_data = imputed_data.cpu().detach().numpy()
else:
    imputed_data = imputed_data.detach().numpy()

This step is not required in TensorFlow.

5. Other functions

Many TensorFlow functions do not exist in PyTorch under the same name, but equivalents can be found in PyTorch itself or in other libraries such as NumPy, as shown in the table below.

TensorFlow function       PyTorch equivalent                Parameter differences
tf.sqrt                   torch.sqrt                        identical
tf.random_normal          np.random.normal (NumPy)          tf.random_normal(shape=size, stddev=xavier_stddev)
                                                            np.random.normal(size=size, scale=xavier_stddev)
tf.concat                 torch.cat                         inputs = tf.concat(values=[x, m], axis=1)
                                                            inputs = torch.cat(tensors=[x, m], dim=1)
tf.nn.relu                F.relu (torch.nn.functional)      identical
tf.nn.sigmoid             torch.sigmoid                     identical
tf.matmul                 torch.matmul                      identical
tf.reduce_mean            torch.mean                        identical
tf.log                    torch.log                         identical
tf.zeros                  torch.zeros                       identical
tf.train.AdamOptimizer    torch.optim.Adam                  optimizer_D = tf.train.AdamOptimizer().minimize(D_loss, var_list=theta_D)
                                                            optimizer_D = torch.optim.Adam(params=theta_D)
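A few of the table's correspondences, exercised on the PyTorch side; the input tensors here are made up for illustration, and the TensorFlow calls appear only as comments.

```python
import torch
import torch.nn.functional as F

x = torch.ones(2, 3)
m = torch.zeros(2, 3)

# tf.concat(values=[x, m], axis=1)  ->  torch.cat with dim= instead of axis=
inputs = torch.cat([x, m], dim=1)         # shape (2, 6)

# tf.nn.relu  ->  torch.nn.functional.relu
out = F.relu(torch.tensor([-1.0, 2.0]))   # negative values clamped to 0

# tf.reduce_mean  ->  torch.mean
mean = torch.mean(inputs)                 # half ones, half zeros -> 0.5
```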

[Note]: This article is for reference only; please consult the relevant documentation when actually converting code. If you can, it is worth mastering both deep learning frameworks~

Reference: 1. https://blog.csdn.net/dou3516/article/details/109181203
