Online or batch training by default in tensorflow
By : ERL
Date : November 22 2020, 03:01 PM
This may help you fix your problem. There are mainly three types of gradient descent:
Stochastic Gradient Descent, Batch Gradient Descent, and Mini-Batch Gradient Descent.
code :
N_EPOCHS = ...    # need to define here
BATCH_SIZE = ...  # need to define here

with tf.Session() as sess:
    train_count = len(train_x)

    for i in range(1, N_EPOCHS + 1):
        # slide a window of BATCH_SIZE samples over the training set
        for start, end in zip(range(0, train_count, BATCH_SIZE),
                              range(BATCH_SIZE, train_count + 1, BATCH_SIZE)):
            sess.run(train_op, feed_dict={X: train_x[start:end],
                                          Y: train_y[start:end]})
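For reference, the choice of BATCH_SIZE in the loop above is what selects the variant (a minimal sketch, assuming the same train_x as above):
code :
BATCH_SIZE = 1             # stochastic gradient descent: one sample per update
BATCH_SIZE = 32            # mini-batch gradient descent: anything in between
BATCH_SIZE = len(train_x)  # batch gradient descent: the full training set per update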

LSTM with Keras for mini-batch training and online testing
By : Don
Date : March 29 2020, 07:55 AM
I think the issue was caused by the following. If I understand you correctly, you are asking if you can enable statefulness after training. This should be possible, yes. For example:
code :
# train with a stateless RNN first (units: number of RNN units)
net = Dense(1)(SimpleRNN(units, stateful=False)(input))
model = Model(inputs=input, outputs=net)

model.fit(...)

# then rebuild the same architecture with stateful=True
# and copy the trained weights across
w = model.get_weights()
net = Dense(1)(SimpleRNN(units, stateful=True)(input))
model = Model(inputs=input, outputs=net)
model.set_weights(w)
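One caveat worth adding (not in the original answer): a stateful RNN needs a fixed batch size baked into its input, so for online testing the inference model is usually rebuilt with batch size 1. A minimal sketch, where timesteps, features and units are hypothetical placeholders:
code :
from tensorflow.keras.layers import Input, Dense, SimpleRNN
from tensorflow.keras.models import Model

timesteps, features, units = 10, 3, 32  # hypothetical dimensions

# stateful layers require batch_shape (a fixed batch size), not shape
inp = Input(batch_shape=(1, timesteps, features))  # batch size 1 for online testing
net = Dense(1)(SimpleRNN(units, stateful=True)(inp))
inference_model = Model(inputs=inp, outputs=net)
inference_model.set_weights(w)  # w taken from the trained stateless model above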
TensorFlow: does tf.train.batch automatically load the next batch when the batch has finished training?
By : Mike Smart
Date : March 29 2020, 07:55 AM
I hope this helps. The question was:
... does tf.train.batch automatically feed in another batch of data to the session?
code :
queue = tf.train.string_input_producer(filenames,
        num_epochs=1) # only iterate through all samples in dataset once

reader = tf.TFRecordReader() # or any reader you need
_, example = reader.read(queue)

image, label = your_conversion_fn(example)

# tf.train.batch will now load up to 100 image-label pairs on sess.run(...)
# most tf ops are tuned to work on batches,
# which is faster and also gives better results for e.g. gradient calculation
images, labels = tf.train.batch([image, label], batch_size=100)

with tf.Session() as sess:
    # "boilerplate" code
    sess.run([
        tf.local_variables_initializer(),
        tf.global_variables_initializer(),
    ])
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    try:
        # in most cases coord.should_stop() will return True
        # when there are no more samples to read
        # (with num_epochs=None the queue would cycle forever)
        while not coord.should_stop():
            # will start reading, working data from input queue
            # and "fetch" the results of the computation graph
            # into raw_images and raw_labels
            raw_images, raw_labels = sess.run([images, labels])
    finally:
        coord.request_stop()
        coord.join(threads)
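Note that input queues were later superseded by the tf.data API. A minimal sketch of the same pipeline with tf.data (reusing the same hypothetical your_conversion_fn as above):
code :
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(your_conversion_fn)  # serialized example -> (image, label)
dataset = dataset.batch(100)
iterator = dataset.make_one_shot_iterator()
images, labels = iterator.get_next()

with tf.Session() as sess:
    try:
        while True:  # the iterator signals exhaustion via OutOfRangeError
            raw_images, raw_labels = sess.run([images, labels])
    except tf.errors.OutOfRangeError:
        pass  # one full pass over the data is done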
Mini-batch training in Tensorflow when using FIFOQueue
By : Bu Ali
Date : March 29 2020, 07:55 AM
The question was: I am training a linear regression problem using tf.train.GradientDescentOptimizer() in Tensorflow. In general, I can use placeholders and feed_dict={} to input a batch of samples each time and train the weight W. However, I would like to use tf.FIFOQueue instead of feed_dict. For example, in the following code, I input X and Y and train the weight W.
The code using tf.data (with comments):
code :
import tensorflow as tf

v_dimen = 300
n_samples = 10000
batch_size = 32
X = tf.random_normal([n_samples, v_dimen], mean=0, stddev=1)
Y = tf.random_normal([n_samples, 1], mean=0, stddev=1)

# X and Y are fixed once having created.
dataset = tf.data.Dataset.from_tensor_slices((X, Y))
# dataset = dataset.shuffle(n_samples)  # shuffle
dataset = dataset.repeat()  # will raise OutOfRangeError if not repeat
dataset = dataset.batch(batch_size)  # specify batch_size
iterator = dataset.make_initializable_iterator()
X_batch, Y_batch = iterator.get_next()  # like dequeue.

W = tf.Variable(tf.random.truncated_normal((v_dimen, 1), mean=0.0, stddev=0.001))
predicted_Y = tf.matmul(X_batch, W)  # some function on X, like tf.matmul(X_batch,W)
loss = tf.nn.l2_loss(Y_batch - predicted_Y)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss, var_list=[W])
init = [tf.global_variables_initializer(), iterator.initializer]  # iterator.initializer should be initialized.

with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        _, x, y = sess.run([optimizer, X_batch, Y_batch])
        print(i, x.shape, y.shape, y[0])  # y[0] will be repeated after 10000 / 32 = 625 iterations. 
The code using tf.FIFOQueue (with comments):
code :
import tensorflow as tf

v_dimen = 300
n_samples = 100  # you don't enqueue too many elements each time.
batch_size = 32
X = tf.random_normal([n_samples, v_dimen], mean=0, stddev=1) 
Y = tf.random_normal([n_samples, 1], mean=0, stddev=1) 
# each time X and Y will be re-created when being demanded to enqueue.

# The capacity of queue is not the same as the batch size, it is just for the queue. 
# It is the upper bound on the number of elements that may be stored in this queue.
# When you want to use `dequeue_many`, which allows to specify the batch size, the `shapes` is also important.
# Because `dequeue_many` slices each component tensor along the 0th dimension to make multiple elements as output. 
# For the same reason, `enqueue_many` should be used.
# see more in the documentation of `FIFOQueue`, `enqueue_many` and `dequeue_many`.
q_in = tf.FIFOQueue(capacity=50, dtypes=tf.float32, shapes=[v_dimen]) 
enqueue_op = q_in.enqueue_many(X)
numberOfThreads = 1
qr = tf.train.QueueRunner(q_in, [enqueue_op] * numberOfThreads)
tf.train.add_queue_runner(qr)
X_batch = q_in.dequeue_many(batch_size)

q_out = tf.FIFOQueue(capacity=50, dtypes=tf.float32, shapes=[1])
enqueue_op = q_out.enqueue_many(Y)
numberOfThreads = 1
qr = tf.train.QueueRunner(q_out, [enqueue_op] * numberOfThreads)
tf.train.add_queue_runner(qr)
Y_batch = q_out.dequeue_many(batch_size)

W = tf.Variable(tf.random.truncated_normal((v_dimen, 1), mean=0.0,stddev=0.001))
predicted_Y = tf.matmul(X_batch,W) # some function on X, like tf.matmul(X_batch,W)
loss = tf.nn.l2_loss(Y_batch - predicted_Y)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss, var_list=[W])
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    for i in range(1000):
        sess.run([optimizer])

    coord.request_stop()
    coord.join(threads)
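As a design note, the tf.data version is simpler and keeps X and Y fixed once created, whereas in the FIFOQueue version X and Y are re-created each time the queue runners enqueue, so the two pipelines do not train on identical data.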
I use batch = 5 in tensorflow during the training phase, why can't I use batch = 1 at test time in tensorflowjs?
By : Karen Gabrielyan
Date : March 29 2020, 07:55 AM
I hope this fixes your issue. It sounds like you set the batch size explicitly as part of the input shape in your training job, e.g.:
code :
# training graph with a hard-coded batch size of 5:
x = tf.placeholder("float", shape=[5, 512, 512, 12])
# use None instead, so the graph accepts any batch size (including 1):
x = tf.placeholder("float", shape=[None, 512, 512, 12])
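If that is the case, rebuilding the graph with a None batch dimension and re-exporting it should let TensorFlow.js run inference on a single-sample batch.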
How to limit RAM usage while batch training in tensorflow?
By : Lavon Long Baird
Date : March 29 2020, 07:55 AM
I hope that helps. Loading a big dataset entirely into memory is not a good idea. I suggest you use something different to load the dataset; take a look at the Dataset API in TensorFlow: https://www.tensorflow.org/programmers_guide/datasets
You might need to convert your data into another format, but if you have a CSV or TXT file with one example per line, you can use TextLineDataset and feed the model with it:
code :
filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]
dataset = tf.data.TextLineDataset(filenames)

def _parse_py_fun(text_line):
    ... your custom code here, return np arrays

def _map_fun(text_line):
    result = tf.py_func(_parse_py_fun, [text_line], [tf.uint8])
    ... other tensorlow code here
    return result

dataset = dataset.map(_map_fun)
dataset = dataset.batch(4)
iterator = dataset.make_one_shot_iterator()
input_data_of_your_model = iterator.get_next()

output = build_model_fn(input_data_of_your_model)

sess.run([output]) # the input was assigned directly when creating the model
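If RAM is still a concern, the pipeline's buffers can be bounded explicitly. A minimal sketch (the buffer sizes are illustrative assumptions, not values from the original answer):
code :
dataset = dataset.shuffle(buffer_size=10000)  # keeps at most 10000 examples in RAM
dataset = dataset.batch(4)
dataset = dataset.prefetch(1)                 # buffers at most one extra batch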