
# Right values for backpropagation parameters

By : Aonor
Date : October 14 2020, 02:15 PM
Hope this helps. First of all, if this is your first network I would suggest using Keras - it is easy to use and good for getting a feel for how a network works. Once you have understood the principles, you can move on to PyTorch/TensorFlow (I personally use PyTorch).
Now to your question: I assume you are implementing a standard neural network (that is, not a CNN).
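As a starting point, here is a minimal sketch of such a standard (fully connected) network in PyTorch, the library mentioned above; the layer sizes and batch size are illustrative assumptions, not taken from the question:

```python
import torch
import torch.nn as nn

# a minimal fully connected network; 10 inputs, 32 hidden units and
# 1 output are arbitrary example sizes
model = nn.Sequential(
    nn.Linear(10, 32),   # input layer: 10 features -> 32 hidden units
    nn.Sigmoid(),        # classic activation for backpropagation
    nn.Linear(32, 1),    # output layer: a single value
)

x = torch.randn(4, 10)   # batch of 4 example samples
y = model(x)
print(y.shape)
```

Keras offers an equally compact `Sequential` API if you prefer to start there.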

## What are good values for u and d in Silva and Almeida's backpropagation algorithm?

By : exezick
Date : March 29 2020, 07:55 AM
Hope this helps. I have read that good "starting" values that fit most problems are u = 1.2 and d = 0.8, but I can't find the source right now.
Edit: I found it, PDF pages 10-11

## Backpropagation algorithm (Matlab): output values are saturating to 1

By : A. Brown
Date : March 29 2020, 07:55 AM
Hope this fixes the issue. The sigmoid function is limited to the range (0, 1), so it will never hit your target values (since they are all greater than 1). You should scale your target values so they are also in the range of the sigmoid. Since you know your target values are constrained to the range (0, 100), just divide them all by 100.
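Concretely, the scaling looks like this (the target and output values below are made-up examples):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# targets in (0, 100) can never be matched by a sigmoid output,
# so scale them into (0, 1) before training...
targets = np.array([12.0, 55.0, 97.0])
scaled = targets / 100.0          # now inside the sigmoid's range

# ...and scale the network's outputs back up when reporting predictions
net_output = sigmoid(np.array([-2.0, 0.2, 3.5]))
predictions = net_output * 100.0
print(predictions)
```

Train against `scaled`, and only multiply by 100 when interpreting the results.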

## AForge.NET - Backpropagation learning always returns values [-1;1]

By : Saikrishna Mamidi
Date : March 29 2020, 07:55 AM
Hope this helps you out. I have not worked with AForge yet, but the BipolarSigmoidFunction is most probably tanh, i.e. the output is within [-1, 1]. This is usually used for classification, or sometimes for bounded regression. In your case you can either scale the data or use a linear activation function (e.g. the identity, g(a) = a).
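A quick demonstration of the bound and of the rescaling option (the data range [0, 100] is an assumed example, not from the question):

```python
import numpy as np

# tanh (the usual bipolar sigmoid) is bounded to (-1, 1): no matter
# how large the activation, the output never leaves that interval
a = np.linspace(-10.0, 10.0, 1000)
out = np.tanh(a)
print(out.min(), out.max())

# option 1: map the tanh output linearly onto the data's range
lo, hi = 0.0, 100.0               # assumed target range
rescaled = (out + 1.0) / 2.0 * (hi - lo) + lo
print(rescaled.min(), rescaled.max())
```

Option 2 (a linear/identity output activation) simply removes the bound at the output layer instead.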

## Finding parameters with backpropagation and gradient descent in PyTorch

By : user3583079
Date : March 29 2020, 07:55 AM
Hope this fixes the issue. You say you are experimenting with PyTorch, autodifferentiation, and gradient descent.
code :
``````
# your code, with the missing gradient step added; compute_out was not
# shown in the question, so a simple stand-in is assumed here
import torch
import numpy as np

def compute_out(X, W):
    # hypothetical placeholder for the question's compute_out
    return (X.t() @ W @ X).sum()

X = np.array([[3.], [4.], [5.]])
X = torch.from_numpy(X)
W = np.random.randn(3, 3)
W = np.triu(W, k=0)
W = torch.from_numpy(W)
W.requires_grad_(True)   # let autograd track gradients w.r.t. W

# define parameters for gradient descent
max_iter = 100
lr_rate = 1e-3

# gradient descent for max_iter iterations, or until |out| < 0.01
i = 0
out = compute_out(X, W)
while (i < max_iter) and (torch.abs(out) > 0.01):
    loss = (out - 0) ** 2
    loss.backward()                # compute d(loss)/dW
    with torch.no_grad():
        W -= lr_rate * W.grad      # take one descent step
        W.grad.zero_()             # clear gradients for the next step
    i += 1
    print(f"{i}: {out}")
    out = compute_out(X, W)

print(W)
``````