What are good values for u and d in Silva and Almeida's backpropagation algorithm?
By : exezick
Date : March 29 2020, 07:55 AM
I have read that good "starting" values that fit most problems are u = 1.2 and d = 0.8, but I can't find the source right now. Edit: I found it, PDF page 1011
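For context, here is a minimal NumPy sketch of the kind of per-weight step-size adaptation those two factors control (the function name and variables are mine, not from the paper): each weight keeps its own learning rate, which is multiplied by u while its gradient keeps the same sign and by d when the sign flips.

```python
import numpy as np

u, d = 1.2, 0.8  # the suggested starting values

def silva_almeida_step(w, grad, prev_grad, lr):
    """One adaptive update: grow the rate where the gradient kept its sign,
    shrink it where the sign flipped, then take a plain gradient step."""
    lr = np.where(grad * prev_grad > 0.0, lr * u, lr * d)
    return w - lr * grad, lr

# Toy demo: minimize f(w) = 0.5 * w**2, whose gradient is simply w.
w = np.array([5.0, -3.0])
lr = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(50):
    grad = w
    w, lr = silva_almeida_step(w, grad, prev_grad, lr)
    prev_grad = grad
```

The point of the u/d pair is that rates grow geometrically on smooth stretches and shrink quickly once the update starts oscillating, which is why u slightly above 1 and d slightly below 1 work for most problems.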

Backpropagation algorithm (Matlab): output values are saturating to 1
By : A. Brown
Date : March 29 2020, 07:55 AM
Hope this fixes the issue. The sigmoid function is limited to the range (0, 1), so it will never reach your target values (since they are all greater than 1). You should scale your target values so they also lie in the sigmoid's range. Since you know your target values are constrained to the range (0, 100), just divide them all by 100.
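A quick sketch of that rescaling in NumPy (the array contents are illustrative, not from the question):

```python
import numpy as np

targets = np.array([12.0, 55.0, 98.0])   # known to lie in (0, 100)
scaled = targets / 100.0                 # now inside the sigmoid's range (0, 1)

# After training, map the network's outputs back to the original scale.
outputs = scaled                         # stand-in for the network's predictions
rescaled = outputs * 100.0
```

Remember to apply the same inverse scaling to the network's outputs at prediction time, otherwise the results will look 100x too small.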

AForge.NET backpropagation learning always returns values [-1;1]
By : Saikrishna Mamidi
Date : March 29 2020, 07:55 AM
This should help you out. I have not worked with AForge yet, but the BipolarSigmoidFunction is most probably tanh, i.e. the output is within [-1, 1]. This is usually used for classification or sometimes for bounded regression. In your case you can either scale the data or use a linear activation function (e.g. identity, g(a) = a).
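I haven't verified AForge's exact implementation, but if the bipolar sigmoid is tanh-like you can see the bound directly with plain NumPy:

```python
import numpy as np

a = np.linspace(-10.0, 10.0, 1001)
out = np.tanh(a)   # bipolar sigmoid: confined to (-1, 1) however large |a| gets
identity = a       # linear activation g(a) = a: unbounded, fine for regression

print(out.min(), out.max())
```

This is why unscaled regression targets outside [-1, 1] can never be hit with a tanh output unit.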

Finding parameters with backpropagation and gradient descent in PyTorch
By : user3583079
Date : March 29 2020, 07:55 AM
This fixes the issue. I am experimenting with PyTorch, autodifferentiation, and gradient descent. Here is the code:
# your code, with the dropped minus signs restored and the update detached
import torch
import numpy as np

def compute_out(X, W):
    # Hypothetical stand-in for the question's compute_out (not shown in the post).
    return (X.t() @ W @ X).sum()

X = np.array([[3.], [4.], [5.]])
X = torch.from_numpy(X)
X.requires_grad = True
W = np.random.randn(3, 3)
W = np.triu(W, k=0)   # keep only the upper triangle
W = torch.from_numpy(W)
W.requires_grad = True
# define parameters for gradient descent
max_iter = 100
lr_rate = 1e-3        # note: 1e3 (the scraped text lost the minus sign) would diverge
# we will do gradient descent for max_iter iterations, or until the convergence criterion is met.
i = 0
out = compute_out(X, W)
while (i < max_iter) and (torch.abs(out) > 0.01):
    loss = (out - 0) ** 2
    grad = torch.autograd.grad(loss, W)[0]
    # detach so W stays a leaf tensor we can differentiate w.r.t. next iteration
    W = (W - lr_rate * grad).detach().requires_grad_()
    i += 1
    print(f"{i}: {out}")
    out = compute_out(X, W)
print(W)

How do we calculate the new values for theta(weights?) in the output layer after backpropagation?
By : Aahlad Gogineni
Date : March 29 2020, 07:55 AM
Something like the below fixes the issue. I'm currently trying to catch up with Andrew Ng's machine learning course on Coursera and I'm having a little bit of trouble... Delta at the output layer is simply the activation minus the label, delta = a - y.
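A small NumPy sketch of that update for a single sigmoid output unit (my own variable names, not the course's exact code): the output-layer error is the prediction minus the label, and the weights theta move against delta times the previous layer's activations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a_prev = np.array([[1.0], [0.5], [0.2]])  # activations feeding the output layer (incl. bias)
theta = np.zeros((1, 3))                  # output-layer weights
y = np.array([[1.0]])                     # label
alpha = 0.5                               # learning rate

for _ in range(200):
    a_out = sigmoid(theta @ a_prev)       # forward pass
    delta = a_out - y                     # delta at the output layer: a - y
    theta -= alpha * delta @ a_prev.T     # gradient step on the output weights
```

With the label fixed at 1, repeated updates drive the output activation toward 1, which is the behaviour the gradient step is supposed to produce.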

