Insights on Research

This year’s MLSys 2021 conference was filled with insightful research at the intersection of machine learning and systems. In case you missed it, we have gathered the key points to keep you up to date on the latest research. Talks such as ‘A Deep-Learning based Cost Model for Automatic Code Optimization’ from MIT, the PeRSonAI workshop, and ‘Tethics: the Ethics of AI’ are just a few examples of the standout topics covered.

‘A Deep-Learning based Cost Model for Automatic Code Optimization’ from MIT

In ‘A Deep-Learning based Cost Model for Automatic Code Optimization’, researchers from MIT accurately predict the speedups resulting from a huge combinatorial space of complex code optimizations, so that the best code transformation can be chosen. To achieve this, they took on the substantial effort of generating a training dataset from scratch: applying random optimizations to random programs and then measuring the speedups of those optimizations by running them.
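To make the idea concrete, here is a minimal sketch (in PyTorch) of such a learned cost model: a small regression network that maps a feature vector describing a (program, transformation) pair to a predicted speedup. The featurization and the network shape here are hypothetical placeholders, not the architecture from the MIT paper; the point is only that, once trained on measured speedups, such a model can rank candidate transformations without executing them.

# Minimal sketch, not the authors' model: regress speedup from
# a hypothetical feature vector describing a (program, optimization) pair.
import torch
import torch.nn as nn

class SpeedupModel(nn.Module):
    def __init__(self, n_features=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted speedup (regression target)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# Placeholder data standing in for the measured dataset described above:
# features of random (program, optimization) pairs and observed speedups.
features = torch.randn(1024, 64)
speedups = torch.rand(1024) * 10

model = SpeedupModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(features), speedups)
    loss.backward()
    optimizer.step()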

According to the invited talk by William Dally, Chief Scientist and SVP of Research at NVIDIA, the complexity of Deep Learning models doubled every “two months” from AlexNet (2012) to GPT-3 (2020). Meanwhile, the single-chip inference performance of Deep Learning hardware less than doubled “every year” until a big jump thanks to the Ampere architecture in 2020. Thus, non-hardware innovations like the one described above from MIT are also crucial for keeping up with the growing complexity of Deep Learning models.

PeRSonAI workshops

Personalized recommendation is the process of ranking and recommending content based on users’ personal preferences. Recommendation algorithms are central to personalized search results, marketing strategies, e-commerce product suggestions, and entertainment content. Given the pervasive use of personalized recommendations across many Internet services, state-of-the-art recommendation algorithms use increasingly sophisticated machine learning approaches.
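At its simplest, this ranking step can be expressed as scoring every item against a user’s learned preference vector. The sketch below uses randomly initialized, hypothetical embeddings standing in for ones learned from real interaction data; production systems add deep feature interactions, candidate generation, and much more on top of this core step.

import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 1000, 32

# Hypothetical embeddings; in a real system these would be learned
# from interaction data such as clicks, purchases, or watch history.
user_emb = rng.normal(size=(n_users, dim))
item_emb = rng.normal(size=(n_items, dim))

def recommend(user_id, k=5):
    # Score every item by dot-product affinity with the user's
    # preference vector, then return the top-k item indices.
    scores = item_emb @ user_emb[user_id]
    return np.argsort(scores)[::-1][:k]

print(recommend(user_id=42))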

Although Deep Learning has come a very long way in the last decade, only 2% of Deep Learning publications are about Recommendation Systems, and a large effort is still required to build data-center-scale, production-ready Recommendation Systems with Deep Learning. The PeRSonAI workshop at MLSys was a great venue for tackling this issue:

1. Tayo Oguntebi from Google focused on the practical, real-world considerations involved in maximizing the training speed of deep learning recommender engines. He introduced an interesting set of challenges in training deep learning recommenders at scale, stemming from potential imbalances between compute and communication resources on many training platforms. In addition, he suggested best practices for finding efficient design points when tuning recommender architectures.


2. Qingquan Song from Texas A&M University talked about the use of Neural Architecture Search for Click-Through-Rate (CTR) prediction. By modularizing simple but representative feature interactions as virtual building blocks and wiring them into a space of directed acyclic graphs, they evolved architectures with learning-to-rank guidance and accelerated the search using a low-fidelity model. Their empirical analysis across different datasets demonstrated superior generalizability and transferability of the evolved architectures compared with human-crafted ones. A minimal sketch of this kind of proxy-guided architecture search follows below.
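The following sketch is a deliberately simplified, hypothetical illustration of the idea, not the authors’ system: it samples small candidate architectures from a space of interaction blocks (wired here as a chain, a degenerate DAG, for brevity) and scores each with a low-fidelity proxy, namely a handful of training steps on placeholder CTR data.

# Hypothetical sketch of proxy-guided architecture search for CTR models.
import random
import torch
import torch.nn as nn

random.seed(0)
torch.manual_seed(0)

# A tiny search space of interaction blocks (virtual building blocks).
BLOCKS = {
    "mlp": lambda d: nn.Sequential(nn.Linear(d, d), nn.ReLU()),
    "linear": lambda d: nn.Linear(d, d),
    "identity": lambda d: nn.Identity(),
}

def build_model(arch, dim):
    # arch is a sequence of block names wired as a simple chain.
    layers = [BLOCKS[name](dim) for name in arch]
    return nn.Sequential(*layers, nn.Linear(dim, 1))

def proxy_score(model, X, y, steps=20):
    # Low-fidelity evaluation: a handful of training steps instead of
    # fully training each candidate; higher score is better.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()
        optimizer.step()
    return -loss.item()

dim = 16
X = torch.randn(512, dim)            # placeholder CTR features
y = (torch.rand(512) < 0.3).float()  # placeholder click labels

# Random search stands in for the evolutionary, learning-to-rank-guided
# search used in the talk; the proxy scoring idea is the same.
candidates = [random.choices(list(BLOCKS), k=3) for _ in range(8)]
best = max(candidates, key=lambda a: proxy_score(build_model(a, dim), X, y))
print("best architecture:", best)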
