How Machines Learn


How do all the algorithms around us learn to do their jobs?
**OMG PLUSHIE BOTS!!**:

Bot Wallpapers on Patreon:

Footnote:

Podcasts:

Thank you to my supporters on Patreon:

James Bissonette, James Gill, Cas Eliëns, Jeremy Banks, Thomas J Miller Jr MD, Jaclyn Cauley, David F Watson, Jay Edwards, Tianyu Ge, Michael Cao, Caron Hideg, Andrea Di Biagio, Andrey Chursin, Christopher Anthony, Richard Comish, Stephen W. Carson, JoJo Chehebar, Mark Govea, John Buchan, Donal Botkin, Bob Kunz

How neural networks really work with the real linear algebra:

Music by:

22 Comments on “How Machines Learn”

  1. It sounds a lot like human selection in animal breeding. Back in the day, we didn’t understand how horses’ brains worked either, yet people wealthy enough to keep many horses could breed specialised lines for agriculture and communication – or for war, to oppress people who didn’t have the money to support a stable. I don’t really see the risk of a singularity, though: why would anyone actually need a bot or algorithm capable of everything humans are capable of? We have humans for that. It’s much better to have a bunch of extremely specialised bots that constantly improve themselves, each trying to beat its improving rivals at an absurdly specific task.

  2. Hello, I just wanted to say theodd1sout grew up to be an AMAZING guy. And although he said he didn’t learn anything, I think he learned a lot from you, because he talks to millions of people without worrying.

  3. Great video, but man, you mixed gradient descent together with evolutionary algorithms. Not that this is wrong, but programmers usually use either evolution or gradient descent, not both.

    If you wonder why, the reason is simple:

    Gradient descent can "directly" lead you to the "best" neural-network settings – you just push the weights toward values that make the network answer the test correctly. But this requires a big database of, well, data (images, for example); otherwise you can't calculate the gradient (the direction of learning). As you can see, you don't really need to "keep some of the best and kill the worst" – you don't need an evolutionary algorithm (though in some cases it can still make sense to use one).

    An evolutionary algorithm isn't even learning in the intuitive sense – it really is evolution. You create a population of, say, 100 networks with different settings, they all try to beat your "test" (some task), and you calculate how good each one is (its fitness value). Then you kill the worst, and you don't just keep the best – you breed them (mixing their genes together, where a gene can stand for a weight value). You also mutate them (randomly change some gene values) to add more variety to the population. As you can see, you don't need a database for this one, as long as you can calculate how good they are.
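The gradient-descent approach the commenter describes can be sketched in a few lines. This is a hypothetical toy example, not anything from the video: one weight `w` is pushed along the gradient of a squared-error loss until the "network" `w * x` matches the data.

```python
# Minimal gradient-descent sketch: fit a single weight w so that w * x ≈ y.
# Toy data chosen for illustration; the targets are y = 2x, so w should reach 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

w = 0.0    # initial weight setting
lr = 0.05  # learning rate: how far to step along the gradient each time

for step in range(200):
    # loss = mean((w*x - y)^2), so d(loss)/dw = mean(2 * (w*x - y) * x).
    # This gradient is the "direction of learning" the comment mentions,
    # and computing it needs the whole dataset.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # move the weight the way that lowers the loss

print(round(w, 3))  # converges toward 2.0
```

Note that no population is involved: a single candidate is nudged directly toward a good setting, which is why the comment says gradient descent makes the keep-the-best/kill-the-worst step unnecessary.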
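The evolutionary version can be sketched on the same toy problem. Again a hypothetical illustration: each "network" is a single gene (a weight), fitness is negative error on the test, and each generation the worst half is killed while the best half breeds with crossover and mutation.

```python
import random

random.seed(0)  # make the run repeatable

# Same toy test as before: a good gene w satisfies w * x ≈ y, i.e. w ≈ 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def fitness(w):
    # Higher is better: negative mean squared error on the test.
    return -sum((w * x - y) ** 2 for x, y in data) / len(data)

# A population of 100 candidates with random settings.
population = [random.uniform(-5.0, 5.0) for _ in range(100)]

for generation in range(50):
    # Rank by fitness; the worst half is killed, the best half survives.
    population.sort(key=fitness, reverse=True)
    survivors = population[:50]
    # Breed: a child's gene mixes two parents (crossover) plus a small
    # random change (mutation) to keep variety in the population.
    children = []
    while len(children) < 50:
        a, b = random.sample(survivors, 2)
        children.append((a + b) / 2 + random.gauss(0, 0.1))
    population = survivors + children

best = max(population, key=fitness)
print(round(best, 2))  # should land close to 2.0
```

No gradient is ever computed here; all the algorithm needs is a fitness score per candidate, which is the point the comment makes about not needing a large database.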
