Learning about Machine Learning: looking back on 2017 & resolutions for 2018

Mark Ryan
5 min read · Jan 1, 2018

Exactly one year ago I made a commitment to myself to learn as much as I could about Machine Learning. Through the course of 2017, in the midst of meeting commitments to my family and my job, I have spent many hours studying, exploring, and coding. I’ve had moments of bliss when an insight came into focus or a challenging piece of code finally worked. I’ve also had many points of frustration when the demands of my day job prevented me from making progress, or when I was unable to crack a coding challenge. As 2017 comes to an end, I have learned enough about ML theory, Python, and ML frameworks to stand a decent chance of making a breakthrough, but my key goal for 2017 (applying deep learning to solve a non-trivial problem in my job) still eludes me. As I look forward to 2018, I want to share what worked and didn’t work for me in 2017, as well as the opportunities and risks I see for success in ML in 2018.

What worked in 2017:

  • Setting aggressive targets for the first step — I began 2017 by taking Andrew Ng’s excellent intro to ML. I decided to take advantage of January 2017 to cram in as much of this course as possible. By mid-February I had completed the course. This set a great pace for the year, and gave me some of the basic building blocks I needed to make further progress.
  • Using Data Science Experience (DSX) — I work for IBM, so it was natural for me to use DSX as a starting platform for exploring ML. I can say without bias that DSX was a perfect starting point for me — web-based so no time wasted on setup, everything I needed in one place, super simple sharing, and plenty of working code examples to exploit.
Data Science Experience — a great place to get started
  • Committing to a conference presentation — I had the opportunity to pitch a basic ML presentation at the IDUG EMEA conference in October. This commitment forced me to complete a very simple code example in Python using linear regression to answer a simple day-to-day question from my job (see the sketch after this list). It also forced me to fill in gaps in my understanding — it’s a cliché, but nothing tests your understanding of concepts as much as having to explain them to an audience of strangers.
  • Some Teamwork — much of my exploration of ML has been solo. However, I have taken advantage of some opportunities (including an internal IBM competition) to work with and learn from others.
  • ML to DL — After I finished Andrew Ng’s intro course, I committed to exploring deep learning, which presented a dilemma in itself: go deep on neural networks, or circle back for a broader tour of classical ML algorithms? I think the investment of time in understanding neural networks, CNNs, and RNNs will yield more benefits than a more comprehensive investigation of other standalone ML algorithms would have.
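
For the curious, here is roughly the shape of that conference example. This is a minimal sketch with invented numbers, not the actual code from the talk (the real example answered a day-to-day question from my job), but the scikit-learn pattern is the same:

```python
# Minimal linear regression sketch (hypothetical data, not the conference dataset)
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical example: predict a score from hours of study per week
hours = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
scores = np.array([52.0, 57.0, 66.0, 70.0, 78.0])

model = LinearRegression()
model.fit(hours, scores)  # learns slope and intercept from the data

print(f"slope: {model.coef_[0]:.2f}, intercept: {model.intercept_:.2f}")
print(f"prediction for 6 hours: {model.predict([[6.0]])[0]:.1f}")
```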

What didn’t work in 2017:

  • Not enough Teamwork — while I did get some benefit from the insights of others in 2017, I would have learned faster by collaborating more. First, there is an aspect of accountability that comes from working on a team; keeping a side project on track is a real challenge if you’re only accountable to yourself. Second, working with a team speeds up learning and exposes gaps in understanding.
  • Underestimating how hard ML is to master — after the thrill of getting a basic understanding of the “secret sauce” of ML in the form of gradient descent / backpropagation (see the sketch after this list), it was tempting to think that it would be easy to unlock the potential of ML. After some humbling exposure to real masters like Jeremy Howard, I now appreciate how applicable Malcolm Gladwell’s “10,000-Hour Rule” is to ML.
  • Trying to master ML as a sideline — my day job has plenty of interesting potential applications for ML, but ML is not central to it. By contrast, I am aware of peers who have jobs where ML is central to the job, not a sideline. These peers are making faster progress towards ML mastery. To be honest, some of the difference in progress is down to my peers starting with better skills and having more raw horsepower, but I contend that these peers also benefit from having a day job that both demands and rewards ML competency.
  • Too much stretching, not enough running — looking back on 2017, I see a pattern where I have defaulted too often to the “soft” options for my ML time (passive coursework, tweaks to code projects that are already good enough, and, um, writing non-technical blog entries) rather than “hard” options (serious work on preparing data, exercising new ML platforms, and new coding projects). It’s like being a runner and spending too much time stretching instead of biting the bullet and getting on with the run.
  • Too much streaming content — ML has to fit into the time that isn’t already consumed by my other commitments, which means it competes with other uses for that time, including Netflix and Amazon Prime. If I’m serious about making real progress in ML, I need to resist the temptation of another episode of The Man in the High Castle. That temptation is particularly hard to resist when I don’t have a specific ML commitment to meet. Firm external commitments, like conference presentations or contest submission deadlines, help keep me focused when the streamed content is beckoning.
The enemy of ML progress
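
Since I mentioned the “secret sauce” above, here is a minimal NumPy sketch of gradient descent fitting a line to toy data. It is the single-parameter version of the update rule that backpropagation scales up to deep networks; the data and learning rate are invented for illustration:

```python
# Toy gradient descent for y = w*x + b (made-up data and learning rate)
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.2, 5.9, 8.1, 9.8])  # roughly y = 2x

w, b = 0.0, 0.0   # start with arbitrary parameters
lr = 0.01         # learning rate (assumed)

for _ in range(2000):
    error = (w * x + b) - y
    # Gradients of mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # The gradient descent update, the step backpropagation repeats layer by layer
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w = {w:.2f}, b = {b:.2f}")  # converges toward w = 2, b = 0
```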

Using what worked and didn’t work in 2017 as a guide, here’s my list of ML resolutions for 2018:

  • Tackle the unfulfilled goal from 2017: apply deep learning to a problem in my day job. I have a couple of good candidate projects — I just need to focus on cranking out some working prototypes.
  • Reinforce the processes that worked in 2017, in particular teamwork and firm external commitments.
  • Write at least one technical blog entry.

All in all, I’m very grateful for the opportunities I’ve had to learn about and apply ML in 2017. I’m looking forward to learning more in 2018, and I hope the resolutions above will help me make the best use of the time I can devote to building up my ML competency.

