Richie currently leads artificial intelligence at Ensemble Capital, an AI hedge fund based in Singapore. He is also an NVIDIA Deep Learning Institute instructor, helping developers, data scientists, and researchers leverage deep learning to solve challenging problems. In his leisure time, he dives into deep learning research, in particular Visual Question Answering (VQA), with researchers at NUS and MILA.
Richie's passion for enabling anyone to leverage deep learning led him to create Deep Learning Wizard, where he has taught, and continues to teach, thousands of students in more than 60 countries around the world.
He previously conducted research in deep learning, computer vision, and natural language processing at the NExT Search Centre, led by Professor Tat-Seng Chua, which is jointly set up by the National University of Singapore (NUS) and Tsinghua University and is part of the NUS Smart Systems Institute. He has published in top-tier conferences and workshops such as ICML.
by Head of Deep Learning, Ensemble Capital
Every deep learning algorithm comes with a set of hyperparameters, and optimizing them is crucial for achieving faster convergence and lower error rates. Most practitioners tune hyperparameters such as learning rates, decay rates, and L2 regularization strength using common heuristic methods.
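To make the heuristic baseline concrete, a widely used approach is random search over the hyperparameter space: sample learning rate and L2 strength from log-uniform ranges, run a short training job for each sample, and keep the best. The sketch below is illustrative only; the toy quadratic objective and the sampling ranges are assumptions, not part of the talk.

```python
import random

def train_and_evaluate(lr, weight_decay, steps=100):
    """Toy stand-in for a training run: gradient descent with an L2
    penalty on the simple loss f(w) = (w - 3)^2. Returns a
    validation-style error for the final weight."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3) + 2 * weight_decay * w  # loss gradient + L2 term
        w -= lr * grad
    return (w - 3) ** 2

def random_search(trials=50, seed=0):
    """Sample hyperparameters log-uniformly and keep the best trial."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        lr = 10 ** rng.uniform(-3, -0.5)   # learning rate in [1e-3, ~0.3]
        wd = 10 ** rng.uniform(-5, -1)     # L2 strength in [1e-5, 1e-1]
        err = train_and_evaluate(lr, wd)
        if best is None or err < best[0]:
            best = (err, lr, wd)
    return best

err, lr, wd = random_search()
print(f"best error={err:.6f}  lr={lr:.4f}  weight_decay={wd:.5f}")
```

Log-uniform sampling is the usual choice here because sensible learning rates and regularization strengths span several orders of magnitude.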
Recently, researchers have tried to cast hyperparameter optimization itself as a deep learning problem, but these approaches are often limited by poor scalability. I will show how scalable hyperparameter optimization can accelerate convergence, and how an optimizer trained on one problem can transfer its benefits to others at scale. This has industrial impact: deep learning algorithms can be driven to convergence without manual hand-tuning, even for large models.