
Deep Learning 11: Linear Regression using TensorFlow in Google Colaboratory



In this lecture, we implement linear regression using TensorFlow.
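For readers who want the gist without TensorFlow installed, here is a minimal NumPy sketch of the same gradient-descent fit the lecture builds. The toy data, learning rate, and variable names (W, B) are assumptions mirroring the lecture's style, not its exact code.

```python
import numpy as np

# Toy data: y ≈ 2x + 1 with a little noise (assumed example data,
# not the dataset used in the lecture).
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 50)
y_train = 2.0 * x_train + 1.0 + rng.normal(0.0, 0.01, size=x_train.shape)

W, B = 0.0, 0.0   # weight and bias, initialized at zero
lr = 0.1          # learning rate (assumed value)
epochs = 2000

for epoch in range(epochs):
    pred = W * x_train + B
    err = pred - y_train
    cost = np.mean(err ** 2)           # mean squared error
    # Gradients of the MSE cost with respect to W and B
    dW = 2.0 * np.mean(err * x_train)
    dB = 2.0 * np.mean(err)
    W -= lr * dW
    B -= lr * dB

print(W, B)  # should end up close to the true slope 2 and intercept 1
```

The TensorFlow version replaces the hand-written gradient step with an optimizer (e.g. `tf.train.GradientDescentOptimizer`) and a `Session.run` loop, but the arithmetic is the same.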

#regression #tensorflow #colab


16 Comments

  1. Shouldn't the cost under "if not epoch%40:" be calculated this way:

    cost_iter = sess.run(cost_iteration, feed_dict = {X: x_train, Y: y_train}) # for the total cost

    instead of feed_dict = {X: x, Y: y}?
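The commenter has a point about what gets measured: after the inner zip loop ends, x and y hold only the last training pair, so feeding them reports the cost of a single sample rather than the whole training set. A TF-free NumPy sketch of the difference (the toy data and parameter values below are assumptions for illustration):

```python
import numpy as np

x_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = np.array([2.0, 4.1, 5.9, 8.2])  # roughly y = 2x (assumed toy data)
W, B = 2.0, 0.0                           # some fitted parameters

def mse(x, y):
    # Mean squared error of the linear model W*x + B
    return np.mean((W * x + B - y) ** 2)

# Cost over the full training set — what feeding x_train, y_train reports:
total_cost = mse(x_train, y_train)

# Cost of only the last sample — what feed_dict = {X: x, Y: y} reports
# after the zip loop, since x and y still hold the final pair:
last_cost = mse(x_train[-1], y_train[-1])

print(total_cost, last_cost)  # generally different values
```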

  2. I am now following your great lectures, which are very helpful and easy to understand. However, due to the change to TensorFlow 2.0 on Google Colab, some of the code needs to be revised before it can be used.

    For Lecture 11, it is simple to fall back to the old version, and it works:
    # import tensorflow as tf
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    However, for Lecture 12, when I followed your lecture exactly and executed the following two lines:

    sess.run(optimizer, feed_dict = {X: x, Y: y})
    summary_epochs = sess.run(merged_summary, feed_dict = {X: x, Y: y})

    The first line runs properly, but the second line reports the error below. What confuses me is that both lines feed the same parameters with tf.float32, yet the first line accepts the parameters and the second rejects them.
    —————————————————————————
    InvalidArgumentError Traceback (most recent call last)

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
    1364 try:
    -> 1365 return fn(*args)
    1366 except errors.OpError as e:

    6 frames

    InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_12' with dtype float
    [[{{node Placeholder_12}}]]

    During handling of the above exception, another exception occurred:
    —————————————————————————

  3. Great tutorial! I appreciate you explaining the logic before jumping right into the code. Thanks.

  4. Very straightforward. I wish we could use some studio instead of coding all these long lines.

  5. Thanks Ahlad! These are great lectures. Please keep up the good work.

    I have a question about the final iterations; I am not sure why this happens. See below:
    I do this:

    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(epochs):
            for x, y in zip(x_train, y_train):
                sess.run(optimizer, feed_dict = {X: x, Y: y})
            if not epoch % 40:
                W1 = sess.run(W)
                B1 = sess.run(B)
                cost_iter = sess.run(cost_iteration, feed_dict = {X: x, Y: y})
                print('Epochs: %f Cost: %f Weight: %f Bias: %f' % (epoch, cost_iter, W1, B1))

        Weight = sess.run(W)
        Bias = sess.run(B)

    plt.plot(x_train, y_train, 'o')
    plt.plot(x_train, Weight * x_train + Bias)

    But I end up with nan for the cost, weight, and bias at every printed epoch:
    Epochs: 0.000000 Cost: nan Weight: nan Bias: nan

    I wonder if you may know what I am doing wrong. Many thanks! Hossein
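One common cause of nan from the very first printed epoch is a learning rate that is too large for the scale of the data (especially unnormalized features), which makes gradient descent diverge. This is only a guess about Hossein's setup, but it is easy to reproduce in a TF-free NumPy sketch (the data and learning rates below are assumed values):

```python
import numpy as np

# Toy data with large, unnormalized x values (an assumed scenario)
x = np.linspace(0.0, 100.0, 50)
y = 2.0 * x + 1.0

def fit(lr, epochs=300):
    """Plain gradient descent on the mean-squared-error cost."""
    W, B = 0.0, 0.0
    for _ in range(epochs):
        err = W * x + B - y
        W -= lr * 2.0 * np.mean(err * x)
        B -= lr * 2.0 * np.mean(err)
    return W, B

W_bad, _ = fit(lr=0.01)   # step too large for x ~ 100: updates blow up to nan
W_ok, _ = fit(lr=1e-5)    # small enough step: W heads toward the true slope 2

print(W_bad, W_ok)
```

If this is the cause, normalizing x_train and y_train, or lowering the learning rate, usually fixes it.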

  6. Kumar, I must say that you must have been trained by some very traditional mathematics teachers. The way you described LR using TF reminded me of my college teacher. Good job, and I am glad that someone took the pains to teach TF and LR in such a great way. Thank you, and looking forward to more of it.
