Learning about deep learning
We've had a number of posts here at HTC that have focused on deep learning neural networks. How you might go about training and deploying them. What you might think about doing with them. How exactly they work.
But there has been this underlying assumption that you kind of understand what the hell we are talking about. What if you are just getting started? How do you get a leg up to understand things well enough to try out some of this exciting stuff yourself?
Stanford University has a very interesting course in their undergraduate CS curriculum called CS230: Deep Learning. The course staff currently includes Andrew Ng, no less. In this course you learn the foundations of deep learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about convolutional networks, RNNs, LSTMs, Adam, Dropout, BatchNorm, Xavier/He initialization, and more.
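If those terms are still abstract, here is a rough idea of how several of them show up in real code. This is just a minimal sketch in PyTorch, not course material, and the layer sizes and learning rate are arbitrary assumptions. It wires together Dropout, BatchNorm, He (Kaiming) initialization, and the Adam optimizer in a tiny classifier.

```python
import torch
import torch.nn as nn

# A tiny fully connected classifier touching several of the topics CS230 covers:
# BatchNorm, Dropout, He (Kaiming) initialization, and the Adam optimizer.
# Layer sizes are arbitrary, purely for illustration.
class TinyNet(nn.Module):
    def __init__(self, in_features=784, hidden=256, classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.BatchNorm1d(hidden),   # BatchNorm: normalize activations per mini-batch
            nn.ReLU(),
            nn.Dropout(p=0.5),        # Dropout: randomly zero activations during training
            nn.Linear(hidden, classes),
        )
        # Xavier/He initialization: He (Kaiming) init pairs well with ReLU layers
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
                nn.init.zeros_(m.bias)

    def forward(self, x):
        return self.net(x)

model = TinyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam optimizer
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

Every line in that sketch maps to a topic the course unpacks in far more detail.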
Sounds sweet. Now at this point you may be lamenting the fact that you don't have whatever large sum of money it takes to get into Stanford, nor necessarily the time and academic background needed to pull that off.
But fear not. Everything you need to take this course is available for you online. For free. It's all there.
So let's take a look at the Syllabus.
There is a series of lectures, and the slides for each one are available to view and download as PDFs. So why don't you grab all of those now and put them in a folder on your computer?
There are also online modules associated with almost all of the lectures, and they include additional lecture slides to view and download as PDFs. So why don't you grab all of those now and put them in a second folder on your computer?
And don't forget the optional readings links on that page. They point you at some great articles to read, and guess what, they are PDFs you can view and download. So why don't you grab all of them and stuff them in another folder on your computer?
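If clicking through every link by hand sounds tedious, a short script can do the grabbing for you. This is only a sketch, assuming the syllabus is a plain HTML page whose relevant links end in .pdf; the URL and folder name below are placeholders, not the real ones.

```python
# Hedged sketch: scrape every .pdf link from a syllabus-style HTML page
# and save the files into a local folder. The URL below is a placeholder;
# point it at the actual syllabus page, and be polite about request volume.
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

SYLLABUS_URL = "https://example.edu/cs230/syllabus.html"  # placeholder, not the real URL
OUT_DIR = "cs230_pdfs"

os.makedirs(OUT_DIR, exist_ok=True)
html = requests.get(SYLLABUS_URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

for link in soup.find_all("a", href=True):
    href = link["href"]
    if href.lower().endswith(".pdf"):
        pdf_url = urljoin(SYLLABUS_URL, href)  # resolve relative links
        filename = os.path.join(OUT_DIR, os.path.basename(href))
        with open(filename, "wb") as f:
            f.write(requests.get(pdf_url, timeout=60).content)
        print("saved", filename)
```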
There are also Instructions links on that Syllabus page. Take a look at them. They describe the different class projects the students need to implement. You will have to suss those out yourself, since you are not a Stanford student and don't have access to their online portal.
Now you can certainly learn enough from just reading through all of the material we have just discussed to become way better educated about deep learning neural networks. But at some point you are going to have to get your hands dirty and actually work with this stuff for real. The class projects discussed in the second half of that first lecture's slides are, for the most part, very interesting. So look at them, decide which ones interest you, and then approach learning the material from the standpoint of 'how am I going to implement that project?'
The one advantage the Stanford students have over you is that they are obviously hooked up with a repository of hidden secret knowledge about how to actually work with the nuts and bolts of implementing deep learning neural net systems. If I find a link on the course site to that secret knowledge that you can access, I will pass it along.
But don't despair, because one of the things we are trying to do here at HTC is put that secret hidden knowledge together in an easy-to-understand and useful form inside our HTC Toolkit (for members).
What tools do you need?
How the hell do you successfully install them on your particular computer?
What are the best pathways for building, training, and deploying models for different classes of problems?
How can we take things people have already successfully implemented and repurpose them for other uses? (One common pattern for that last one is sketched below.)
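That last question, reusing what others have already built, is often the most practical entry point. Here is a minimal, hedged sketch of the standard transfer-learning pattern in PyTorch: take a pretrained image model, freeze its weights, and swap in a new final layer for your own task. The class count, learning rate, and data here are arbitrary assumptions, not anything prescribed by the course.

```python
# Hedged sketch of transfer learning: repurpose a pretrained ResNet-18 for a
# new classification task by replacing and training only its final layer.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # arbitrary: however many categories your own problem has

# Downloads pretrained ImageNet weights (torchvision >= 0.13 API;
# older versions use pretrained=True instead).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so only the new head learns.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for our task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

A lot of 'repurpose an existing model' projects boil down to some variation on that freeze-and-replace move.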
And of course, you don't strictly need us to figure any of this out. Your web browser of choice and a Google search await you. And what a journey it will be: 40 different 'here's how to install' instructions for whatever small piece of the puzzle you are trying to figure out, all of them different, none of them really getting you to what you actually need to accomplish.
Of course all of the information is there, hiding somewhere.
You will also find links that lead you to sites that are ultimately trying to sell you some kind of online course or book. Some of them are actually pretty good, so we'll be discussing them in later posts. I wanted to focus on the free Stanford course first, since the barrier to entry is zero, and there is a ton of good material there to soak up and absorb.