DL & RL advice
Here’s a collection of my favorite pieces of advice – things I’ve seen either in person or in various online lectures. The one theme I hear from everyone is to find ways to iterate faster. Whether by defining specific metrics, working with only a subset of the data, or first tackling easier problems, it’s always worth thinking about how to speed up the development process. As tempting as it can be, don’t spend all day tweaking hyperparameters or watching simulations run – take a step back and make sure you’re making the best possible use of your own time.
Andrew Ng (from Machine Learning Yearning):
- Define a metric – one specific number that you can calculate and immediately decide whether a change you’ve made is helpful or not.
- Take the extra time to calculate where you should spend your time. Think about how hard it would be to collect more data. Calculate how useful it would be to clean up incorrect labels in a training set. Try to determine what causes the majority of the errors and address that problem first.
- Watch out for training set vs testing set mismatches (especially subtle ones – for time series data, make sure your training set doesn’t have access to information about the future).
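That last point is easy to get wrong with time series, where a random shuffle quietly leaks future information into training. A minimal sketch of a leak-free alternative (the `chronological_split` helper and its parameters are mine, just for illustration):

```python
import numpy as np

def chronological_split(X, y, test_fraction=0.2):
    """Split time-ordered data so the test set comes strictly
    after the training set -- no shuffling, no future leakage."""
    n = len(X)
    cut = int(n * (1 - test_fraction))
    return X[:cut], X[cut:], y[:cut], y[cut:]

# Toy example: 10 time-ordered observations.
X = np.arange(10).reshape(-1, 1)
y = np.arange(10)
X_train, X_test, y_train, y_test = chronological_split(X, y)
# Every training timestamp precedes every test timestamp.
assert X_train.max() < X_test.min()
```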
Jeremy Howard (from FastAI):
- Create a sample data set to make iteration faster.
- Use transfer learning and/or data augmentation as much as possible.
- When in doubt, try it out on an Excel spreadsheet.
- Use Leslie Smith’s 1 cycle policy to set the learning rate for fastest and most accurate training.
- Run a minimalist baseline model (a plain dense neural net, or whatever is the most basic model applicable to the problem). Make sure your fancy model is at least as good as the plain one.
- When approaching a Kaggle competition, spend half an hour on it *every* day. Each day, get a little bit better. You’ll achieve more in the long run with consistency.
- When applying for a job, have one polished and fantastic project to show off, rather than many different half-baked ideas.
- Don’t trust deep learning code on the internet. It may easily be slow, buggy, or otherwise in need of an update.
- Be active on Twitter. There’s a great machine learning community there, and it’s an easy way to keep up with the latest papers.
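Fastai implements the 1cycle policy for you (via `fit_one_cycle`), but the schedule itself is simple enough to sketch by hand: ramp the learning rate up to a peak, then anneal it back down. A rough framework-free sketch (the function name, the cosine annealing variant, and all the parameter values here are my illustrative choices, not fastai’s exact defaults):

```python
import math

def one_cycle_lr(step, total_steps, max_lr=1e-2, div_factor=25.0, pct_warmup=0.3):
    """1cycle-style schedule: the learning rate ramps up linearly
    from max_lr/div_factor to max_lr, then anneals back down along
    a cosine curve."""
    warmup_steps = int(total_steps * pct_warmup)
    start_lr = max_lr / div_factor
    if step < warmup_steps:
        frac = step / max(1, warmup_steps)
        return start_lr + frac * (max_lr - start_lr)
    frac = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return start_lr + 0.5 * (max_lr - start_lr) * (1 + math.cos(math.pi * frac))

total = 100
lrs = [one_cycle_lr(s, total) for s in range(total)]
# The peak sits exactly at the end of the warm-up phase.
assert max(lrs) == lrs[30]
```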
John Schulman (from the Nuts & Bolts of Deep RL Research):
- First test a new algorithm on an easy problem, something you know should work.
- Try your algorithm on toy problems where you expect it will do best and worst, and where you have a guess of what should happen.
- Visualize the learning process: state visitation, the value function.
- Don’t overfit the algorithm to one problem. Simplicity generalizes better.
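Even logging state visitation counts can be done in a few lines. A toy sketch, assuming a 1-D random-walk “agent” standing in for a real policy (in a real run you’d log the agent’s visited states the same way and plot the counts as a heat map):

```python
import random
from collections import Counter

def state_visitation(n_states=5, n_steps=1000, seed=0):
    """Random walk on a 1-D chain of states; count how often
    each state is visited."""
    rng = random.Random(seed)
    state = 0
    visits = Counter({state: 1})
    for _ in range(n_steps):
        state = min(max(state + rng.choice([-1, 1]), 0), n_states - 1)
        visits[state] += 1
    return visits

visits = state_visitation()
# A crude text "heat map" of the visitation counts.
for s in range(5):
    print(f"state {s}: {'#' * (visits[s] // 20)}")
```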
Andrej Karpathy (from Twitter):
- When training a neural net, first try to overfit a single batch.
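This tip usually gets applied inside a framework like PyTorch, but the sanity check itself is framework-free: a model with enough capacity, trained on one tiny batch, should drive the training loss to near zero – and if it can’t, suspect a bug in the loss, the gradients, or the data pipeline. A numpy sketch of the idea, using a linear model where the data (by construction) is perfectly fittable:

```python
import numpy as np

# One tiny batch with linearly realizable targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))         # one batch of 8 examples
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

# Plain gradient descent on mean squared error.
w = np.zeros(3)
lr = 0.1
first_loss = None
for step in range(2000):
    err = X @ w - y
    loss = float(np.mean(err ** 2))
    if first_loss is None:
        first_loss = loss
    w -= lr * (2 / len(X)) * X.T @ err   # MSE gradient step

# The loss on this single batch should collapse to (near) zero.
print(f"loss: {first_loss:.3f} -> {loss:.2e}")
```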
Do you have general advice for approaching deep learning or reinforcement learning problems? Please add your suggestions in the comments below! (And if anyone I’ve listed above thinks I’ve misrepresented their ideas, or if you have other thoughts I should add, please let me know and I’ll update immediately!)
My advice from my machine learning journey so far?
- To test if you know a topic well, try explaining it to someone who doesn’t know machine learning. Then try explaining it to a six-year-old.
- Assume there could be a bug in anything and everything you write. Often bugs will show up only as a slight decrease in accuracy or speed. As often as possible, test that small code blocks do indeed output what you expect.
- This isn’t mine, but a favorite I’ve heard: When stuck on a problem, you must first try for 30 minutes to solve it yourself. After 30 minutes, you must ask for help.
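On testing small code blocks: the cheapest version is a couple of assertions sitting right next to the code they exercise. A minimal sketch (the `normalize` helper is hypothetical, just something small enough to spot-check):

```python
def normalize(xs):
    """Scale a list of numbers so they sum to 1 (hypothetical helper)."""
    total = sum(xs)
    if total == 0:
        raise ValueError("cannot normalize an all-zero list")
    return [x / total for x in xs]

# Cheap spot checks: do small blocks output what you expect?
assert normalize([2, 2]) == [0.5, 0.5]
assert abs(sum(normalize([1, 2, 3])) - 1.0) < 1e-12
```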