Coding Challenge #105: Polynomial Regression with TensorFlow.js

In this challenge, I expand the linear example into polynomial regression!

💻Challenge:

Links discussed in this challenge:
🔗 TensorFlow.js:
🎥 Linear Regression:

🚂Website:
💡Github:
💖Membership:
🛒Store:
📚Books:
🖋️Twitter:
Video editing by Mathieu Blanchette.

🎥 TensorFlow.js:
🎥 Intelligence and Learning:
🎥Coding Challenges:
🎥Intro to Programming using p5.js:…

42 Comments on “Coding Challenge #105: Polynomial Regression with TensorFlow.js”

  1. I was able to do your variable degree exercise by using an array of tensor scalars, but I couldn't figure out how to do it with the weights stored in a tensor1d, which seems like the natural way to store weights. Anyone get it working with the weights in that format?
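
    One way that seems to work (a sketch, not from the video; `degree`, the power matrix, and the matMul trick are my own assumptions) is to build a matrix of powers of x and matrix-multiply it by the weight vector. Gradients flow through matMul and reshape, so optimizer.minimize() can still train the single tensor1d:

    const degree = 3;
    const weights = tf.variable(tf.randomNormal([degree + 1])); // tensor1d

    function predict(x) {
      return tf.tidy(() => {
        const xs = tf.tensor1d(x);
        // Column j holds x^(degree - j), so the last column is x^0 = 1.
        const powers = tf.stack(
          Array.from({ length: degree + 1 }, (_, j) => xs.pow(degree - j)),
          1
        ); // shape [n, degree + 1]
        // [n, degree + 1] x [degree + 1, 1] -> [n, 1] -> tensor1d of ys
        return powers.matMul(weights.reshape([degree + 1, 1])).squeeze();
      });
    }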

  2. The "fancy high degree" polynomial you're looking for is the Lagrange interpolation polynomial. Its degree is not that high (number of points - 1). And yes, its use is often inappropriate. I think the choice of degree depends on the context:
    "- What is the degree of the theoretical function that usually describes what you're studying?
    - Oh, it's the trajectory of a falling object, so the degree should be 2.
    - Here we go."
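
    A quick sketch of evaluating that interpolating polynomial directly from the points (plain JS; the names are mine):

    // Lagrange interpolation: exact fit through every point, degree <= n - 1
    function lagrange(points, x) {
      let sum = 0;
      for (let i = 0; i < points.length; i++) {
        let term = points[i].y;
        for (let j = 0; j < points.length; j++) {
          if (j !== i) {
            term *= (x - points[j].x) / (points[i].x - points[j].x);
          }
        }
        sum += term;
      }
      return sum;
    }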

  3. One of the insightful things I realised (which may not be such a big deal) is that the "a" term will end up being zero if there are only two points on the canvas. If more than two points are present (close to a line) but still don't define a line accurately, the "a" term is expected to be very, very small.

  4. I do believe that I have an answer to your challenge of choosing a degree for the polynomial and then finding the regression. I'm not very good with coding, but I hope I can relay the ideas well enough that a coding solution could be found. First, all polynomials should be generated with the binomial theorem: (N choose J) * (ax + b)^(N - J). "N choose J" is the number of combinations of choosing J items out of N items, or N! / [(N - J)! * J!]. In this example, N is the degree of the polynomial and you iterate over J from 0 to N. Then there need to be as many polynomials generated as N. Place each polynomial into a row of an N x N matrix. From there, sum the coefficients for the respective terms and use these to draw the polynomial. Training would require the a and b terms to be tweaked for each polynomial. My only fear is that when dealing with an even root, the leading coefficient can't be negative; maybe a negative needs to be hard-coded somewhere?
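
    For what it's worth, here is a small sketch of just the binomial-theorem piece of this idea (the standard expansion gives the coefficient of x^J in (ax + b)^N as C(N, J) * a^J * b^(N - J); the rest of the scheme is left as described above):

    // C(n, k): number of ways to choose k items out of n
    function choose(n, k) {
      let result = 1;
      for (let i = 1; i <= k; i++) {
        result = (result * (n - i + 1)) / i;
      }
      return result;
    }

    // Coefficients [c_0, ..., c_N] of (a*x + b)^N, where c_j multiplies x^j
    function expandBinomial(a, b, N) {
      const coeffs = [];
      for (let j = 0; j <= N; j++) {
        coeffs.push(choose(N, j) * Math.pow(a, j) * Math.pow(b, N - j));
      }
      return coeffs;
    }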

  5. I wonder whether it might be helpful, with an eye to generalising to higher degree polynomials, to think not about

    (a×x^3 + b×x^2 + c×x + d)

    so much as

    (((a × x + b) × x + c) × x + d)

    so you can set up an array of coefficients and simply iterate through them applying .mul() and .add()
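
    A sketch of that loop, assuming the coefficients are kept highest-degree-first in an array of tf.variable scalars (like the video's a, b, c, d):

    const coeffs = [a, b, c, d]; // any length: degree = coeffs.length - 1

    function predict(xs) {
      // xs: tensor1d of x values
      return tf.tidy(() => {
        let ys = coeffs[0];
        for (let i = 1; i < coeffs.length; i++) {
          ys = ys.mul(xs).add(coeffs[i]); // Horner's method
        }
        return ys;
      });
    }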

  6. Very nice video, as always. I just wanted to mention that, for instance in physics, linear regression is usually enough, because most of the time you get a function f ~ x^n, and then you can plot your data over x^n instead of x, which should give you a linear distribution (if the formula and data are correct) 😀
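
    In code that trick is just a transform before the fit, e.g. (a sketch; the exponent n is assumed known):

    const n = 2; // e.g. a falling object: y ~ x^2
    const us = xs.map(x => Math.pow(x, n)); // xs: plain array of x values
    // now run the plain linear regression from the previous video
    // on the pairs (us[i], ys[i]) instead of (xs[i], ys[i])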

  7. You can approach this using linear algebra instead, so that the degree can be dynamic like you want. Least squares is a method that uses a matrix to find the coefficients of your polynomial. It is exactly the method you need to avoid overfitting as well (any set of n points can be fit by a polynomial of degree n-1, but it may not follow the real trend of your data because it is trying too hard to precisely fit the data given), because the dimensions of the matrix are exactly equal to the degree of the polynomial that you attempt to use, and can be increased and decreased by hand or using error analysis.

    The formula that you use is inverse(A(transpose) * A) * A(transpose) * vector_of_y_values, where the matrix A is the Vandermonde matrix: each row holds the successive powers of one x value. This gives you the vector of your coefficients for that particular degree of polynomial (a sketch of this formula in code follows below).

    https://en.wikipedia.org/wiki/Vandermonde_matrix

    So instead of having to hard-code adding new variables, you can dynamically change the size (drop-down menu?) of a 2-D array for the matrix and a 1-D array for the vector of y-values used (this couldn't be larger than one less than the number of points that you've currently drawn), and then compute your coefficients that way. (Does JS use arrays? I don't code in JS.)

    Then you'd just use your coefficients in a loop with your TensorFlow add and pow operations.

    P.S. I love that you nerded out doing polynomial regression even though most other people would say "Ew, math"
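
    A plain-JS sketch of that formula (names are mine; it solves the normal equations with Gaussian elimination rather than forming the inverse explicitly):

    // Least-squares polynomial fit: returns [c_0, c_1, ..., c_degree],
    // constant term first, so y ≈ c_0 + c_1*x + ... + c_degree*x^degree.
    function polyfit(xs, ys, degree) {
      // Vandermonde matrix: row i is [1, x_i, x_i^2, ..., x_i^degree].
      const A = xs.map(x =>
        Array.from({ length: degree + 1 }, (_, j) => Math.pow(x, j))
      );
      const n = degree + 1;
      // Form AᵀA and Aᵀy.
      const AtA = Array.from({ length: n }, () => new Array(n).fill(0));
      const Aty = new Array(n).fill(0);
      for (let i = 0; i < xs.length; i++) {
        for (let j = 0; j < n; j++) {
          Aty[j] += A[i][j] * ys[i];
          for (let k = 0; k < n; k++) AtA[j][k] += A[i][j] * A[i][k];
        }
      }
      // Solve AtA * c = Aty by Gauss-Jordan elimination with partial pivoting.
      for (let col = 0; col < n; col++) {
        let pivot = col;
        for (let r = col + 1; r < n; r++) {
          if (Math.abs(AtA[r][col]) > Math.abs(AtA[pivot][col])) pivot = r;
        }
        [AtA[col], AtA[pivot]] = [AtA[pivot], AtA[col]];
        [Aty[col], Aty[pivot]] = [Aty[pivot], Aty[col]];
        for (let r = 0; r < n; r++) {
          if (r === col) continue;
          const f = AtA[r][col] / AtA[col][col];
          for (let k = col; k < n; k++) AtA[r][k] -= f * AtA[col][k];
          Aty[r] -= f * Aty[col];
        }
      }
      return Aty.map((v, i) => v / AtA[i][i]);
    }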

  8. Right after watching this video I was like "Pffff, I can do better than that. I will make it approximate the given points with a conic." For those of you who aren't math nerds: conics are a generalization of ellipses, parabolas, and hyperbolas. I copied the code and made some tweaks to it. All of a sudden I realized I had no idea how to draw a conic (of course I can draw it pixelwise, but that is clearly too slow). I found that Bézier curves can easily help with drawing a parabola, but nothing about the hyperbola… Guess I'm stuck now.
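
    In case it helps anyone similarly stuck, one common trick (not from the video; a and b are the hypothetical conic parameters) is to sample the curve parametrically rather than pixelwise. For one branch of the hyperbola x²/a² - y²/b² = 1, the identity cosh²(t) - sinh²(t) = 1 gives x = a·cosh(t), y = b·sinh(t):

    // p5.js sketch: one branch of the hyperbola, sampled parametrically
    noFill();
    beginShape();
    for (let t = -3; t <= 3; t += 0.05) {
      vertex(a * Math.cosh(t), b * Math.sinh(t));
    }
    endShape();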

  9. It would be nice to print the R² value as well; perhaps you could also use that to choose the order of the equation? E.g., if R² < 0.9, increase the order by 1. It would also be interesting to plot R² vs. order.
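
    A sketch of computing R² from the data points (plain JS; predictAt(x) is a hypothetical helper returning the model's prediction at x):

    function rSquared(xs, ys) {
      const mean = ys.reduce((sum, y) => sum + y, 0) / ys.length;
      let ssRes = 0; // residual sum of squares
      let ssTot = 0; // total sum of squares
      for (let i = 0; i < xs.length; i++) {
        ssRes += Math.pow(ys[i] - predictAt(xs[i]), 2);
        ssTot += Math.pow(ys[i] - mean, 2);
      }
      return 1 - ssRes / ssTot;
    }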

  10. Dan, I think the next natural step is to redo the Nature of Code with TF, with the rules of motion no longer dictated by the applyForce function, rather using the rules to train the AI to position the boids. The initial algorithm becomes the training wheels, and the AI takes over.

  11. It's starting to look like magic. The black box aspect of this most impressive demonstration is worrisome. It works, but you don't know how. It can tell you the difference between a cat and a dog, but not how it figured it out. This is where ethics become so important, because if I'm rejected for a loan by an AI, I'm going to want a reason, not just a result.

  12. Dan, thanks for this amazing tutorial. I was actually thinking of implementing it myself, and you noticed my comment asking about it in the chat. Cool, at least now I know how to do it.

  13. First of all, I absolutely love this series on tensorflow.js! Great job! I am learning a lot — I doubt if I would ever have delved into this topic without your hands-on demos.

    Anyway, having worked with programs that evaluate polynomials in the past, I thought I'd make a suggestion for efficiency and (IMHO) elegance:
    Notice that:
    ax^2 + bx + c can be written as (ax + b)x + c, saving one multiply operation,
    and
    ax^3 + bx^2 + cx + d can be written as ((ax + b)x + c)x + d, saving 3 multiply operations (if you count cubing as 2 multiplies).
    The higher the degree, the more multiply operations you save.

    But even better, look at how easy it is to add a degree to the polynomial. Just multiply the previous expression by x and add the next coefficient. And now there's no need to call the square function, etc.
    So, your predict code for one degree would be
    const ys = a.mul(xs).add(b) (I'm using a instead of m here, to demo the symmetry that follows)
    For 2nd degree (quadratic) it's
    const ys = a.mul(xs).add(b).mul(xs).add(c)
    For 3rd degree (cubic) it's
    const ys = a.mul(xs).add(b).mul(xs).add(c).mul(xs).add(d)
    Notice that there's only one level of parentheses, making it easy to understand, and for each degree you just append a multiply and add operation.
    Whaddya think?
