    Day 1

  • 8:00

    Registration & Light Breakfast

  • 9:00

    Welcome Note & Opening Remarks

  • CHAIR

    Anirudh Koul - Head of Machine Learning Data Sciences - Pinterest

  • Deep Learning Landscape

  • 9:15

    Deep Learning for Physical Machines

    Vitor Guizilini - Senior Research Scientist (Machine Learning) - Toyota Research Institute

    - How can deep learning's success on the web translate to intelligent machines physically interacting in the real world?

    - What is Principle-centric AI and how does it increase safety, robustness and efficiency?

    - How can computer vision leverage geometric principles in self-supervised objectives? (A small illustration follows below.)
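
    As a rough illustration of the last question above, the sketch below shows the core of a photometric reconstruction loss, a common way to turn geometric principles into a self-supervised objective for monocular depth. The depth_net, pose_net and warp_to_target functions are hypothetical placeholders, not TRI's implementation.

        # Minimal sketch (assumed PyTorch): geometry as the supervisory signal.
        import torch.nn.functional as F

        def photometric_loss(target_img, source_img, depth_net, pose_net, warp_to_target):
            depth = depth_net(target_img)                # (B, 1, H, W) predicted depth
            pose = pose_net(target_img, source_img)      # (B, 6) relative camera motion
            # Differentiable warp of the source view into the target frame using the
            # predicted depth, relative pose and known camera intrinsics.
            reconstructed = warp_to_target(source_img, depth, pose)
            return F.l1_loss(reconstructed, target_img)  # photometric (appearance) error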

  • 9:40

    Learning-Based Program Synthesis

    Xinyun Chen - Research Scientist - Google Research

    - Is model scaling all we need for program synthesis?

    - How to model the semi-structured programming context for code generation in production?

    - How to leverage code execution for solving challenging programming tasks?

    - With the advancement of large language models, what are the remaining challenges of learning-based program synthesis?
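
    One way to leverage code execution, touched on in the questions above, is to sample many candidate programs from a language model and keep only those that pass the provided tests. The sketch below is illustrative only; generate_candidates is a hypothetical stand-in for any code-generation model, not the speaker's system.

        # Execution-guided filtering of sampled programs (sketch).
        def passes_tests(program: str, tests: list[tuple[str, str]]) -> bool:
            env: dict = {}
            try:
                exec(program, env)            # define the candidate's functions
                return all(str(eval(expr, env)) == expected for expr, expected in tests)
            except Exception:
                return False

        def synthesize(prompt: str, tests, generate_candidates, n_samples: int = 100):
            candidates = generate_candidates(prompt, n_samples)   # hypothetical model call
            return [p for p in candidates if passes_tests(p, tests)]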

  • 10:05

    Learning to Generate Shapes

    Karl Willis - Senior Research Manager - Autodesk

    - Shape generation is a rapidly evolving area for design and content creation with many different shape representations.
    - State-of-the-art methods typically model shape generation as a sequence prediction problem to produce editable output.
    - Challenges with shape generation remain, including generation of long sequences and hierarchical generation of complex shapes.
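
    To make the "sequence prediction" framing above concrete, here is a minimal autoregressive Transformer over a tokenized sequence of modeling operations (e.g. sketch-and-extrude commands). Vocabulary and layer sizes are illustrative assumptions and do not describe Autodesk's models.

        # Minimal sketch (assumed PyTorch): next-token prediction over shape tokens.
        import torch.nn as nn

        class ShapeSequenceModel(nn.Module):
            def __init__(self, vocab_size=512, d_model=256, n_layers=4, n_heads=8):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, d_model)
                layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
                self.decoder = nn.TransformerEncoder(layer, n_layers)
                self.head = nn.Linear(d_model, vocab_size)

            def forward(self, tokens):        # tokens: (B, T) int64 operation ids
                causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1)).to(tokens.device)
                h = self.decoder(self.embed(tokens), mask=causal)
                return self.head(h)           # (B, T, vocab) next-token logits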

  • 10:30

    Coffee & Networking Break

  • The Future of Deep Learning

  • 11:00

    The Future of Search and Generative AI

    Richard Socher - CEO - You.com

    The ability to find information and get things done online is more important than ever.
    Traditional search engines were conceived and designed when the internet and AI were in a different era. Instead, you.com incorporates recent AI breakthroughs in generating text, images, and code in a conversational way directly into search.

    In this keynote, we will explore the recent advancement in generative AI and its potential to transform the search engine space. We will discuss the challenges and opportunities this technology presents and its potential impact on how we access and consume information online.

     
    - How does Generative AI leverage machine learning algorithms to enable machines to create artificial content like text, image, audio and video?
    - How are common challenges such as data privacy and fraudulent activity managed?
    - Generative AI is not new, so why is it currently such a key topic?


    Richard Socher is the founder and CEO of You.com, the search engine that puts you in control: Your sources. Your time. Your privacy. Richard previously served as the Chief Scientist and EVP at Salesforce. Before that, Richard was the CEO/CTO of AI startup MetaMind, acquired by Salesforce in 2016. Richard received his Ph.D. in computer science at Stanford, where he was recognized for his groundbreaking research in deep learning and NLP. He was awarded the Distinguished Application Paper Award at the International Conference on Machine Learning (ICML) 2011, the 2011 Yahoo! Key Scientific Challenges Award, a Microsoft Research Ph.D. Fellowship in 2012, a 2013 "Magic Grant" from the Brown Institute for Media Innovation, and the 2014 GigaOM Structure Award. He also served as an adjunct professor in the computer science department at Stanford. Outside of work, Richard enjoys paramotor adventures, traveling, and photography.

  • 11:50

    PANEL: Maximizing The Potential of Machine Learning

    - When should deep learning be considered and is it always the best approach?
    - What are the key elements to ensuring deep learning models can be deployed, monitored and managed correctly?
    - What should realistically be expected from applying deep learning models and do we expect too much?

  • MODERATOR

    Anirudh Koul - Head of Machine Learning Data Sciences - Pinterest

  • PANELLIST

    Vipul Raheja - Research Scientist - Grammarly

  • PANELLIST

    Shubham Suresh Patil - Staff Deep Learning Engineer - Stryker

  • 12:40

    Lunch

  • 2:00

    Self-supervised Deep Learning for Automated Speech Recognition

    Shalini Ghosh - Principal Research Scientist - Amazon

    In this talk, we explore two projects at AlexaAI focused on learning global representations that are able to capture long-range correlations/semantics in data, while maintaining a focus on local targeted representations. In the first work, we explore how global multi-modal representations learned from video and audio can be incorporated into local-first ASR representation-learning frameworks. In the second work, we demonstrate how we can use self-supervised techniques to separate useful spoken content from confounding background signals. Finally, we sketch some directions for future research, showing how we can use these methods to improve model performance in ASR.

    - Global multimodal representations (e.g., audio + video + text) can help a task (e.g., ASR) even when only one modality (e.g., audio) is available at run-time
    - We can factor content and context in an unsupervised way to get clean signals for better ASR modeling
    - Self-supervised learning and auxiliary losses can be helpful in global+local representation learning and content/context factorization problems
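
    As a rough illustration of the auxiliary-loss idea in the bullets above, the sketch below combines a frame-level ASR objective with a pooled, "global" self-supervised objective. Every module and loss function here (encoder, asr_head, global_head, ctc_loss_fn, aux_loss_fn) is a hypothetical placeholder, not Amazon's system.

        # Sketch: local ASR loss + weighted global auxiliary loss.
        def training_step(batch, encoder, asr_head, global_head,
                          ctc_loss_fn, aux_loss_fn, aux_weight=0.1):
            local_feats = encoder(batch["audio"])            # frame-level representations
            asr_loss = ctc_loss_fn(asr_head(local_feats), batch["transcript"])
            # Auxiliary objective over a pooled ("global") representation, e.g. aligning
            # audio with video/text, or separating content from background context.
            global_repr = local_feats.mean(dim=1)
            aux_loss = aux_loss_fn(global_head(global_repr), batch["context_targets"])
            return asr_loss + aux_weight * aux_loss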

     

    Shalini is a Principal Research Scientist at Amazon. She has extensive experience and expertise in Machine Learning (ML), especially Deep Learning, and has worked on applications of ML to multiple domains. Before joining Samsung Research, Dr. Ghosh was a Principal Computer Scientist in the Computer Science Laboratory at SRI International, where she was the Principal Investigator/Tech Lead of several impactful DARPA and NSF projects. She was also a Visiting Scientist at Google Research in 2014-2015, where she worked on applying deep learning (Google Brain) models to dialog systems and natural language applications.

    Dr. Ghosh has a Ph.D. in Computer Engineering from the University of Texas at Austin. She has won several grants and awards for her research, including a Best Paper award and a Best Student Paper Runner-up award for applications of ML to dependable computing. Dr. Ghosh is also an area chair of ICML and serves on the program committee of multiple impactful conferences and journals in ML and AI (e.g., NIPS, KDD, AAAI, IJCAI). She has served as an invited panelist on multiple panels, and was invited to be a guest lecturer at UC Berkeley multiple times. Her work has been covered in an interview by the ReWork Women in AI program.

  • Tools for Deep Learning

  • 2:25

    Techniques to Make Language Models More Useful

    Jack Rae - Team Lead - OpenAI

  • 2:50

    Coffee and Networking Break

  • Deep Learning Innovations

  • 4:00

    Recent Advances in Scaling Graph Neural Networks

    Neil Shah - Lead Research Scientist - Snap

    Graph Neural Networks (GNNs) are a widely celebrated tool for solving complex graph modeling problems. Yet, despite their effectiveness at learning from graph data, GNNs are often unacceptably slow when facing industrial-scale data due to the complex data dependencies they induce. In this talk, I will introduce several recent works from our team at Snap Research on improving the scalability of GNNs. More specifically, we develop simple yet novel and effective methods for improving the training and inference speed of GNNs via cross-model model-weight transference and knowledge distillation.

    - How do Graph Neural Networks (GNNs) solve complex graph modelling problems?
    - What are their drawbacks and what problems can this cause?
    - How have recent advancements improved this and how can this be utilised?
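
    One generic technique in this space is distilling a trained GNN teacher into a cheap student (for example, an MLP over node features) so that online inference avoids expensive neighborhood fetches. The sketch below shows only a standard distillation loss under that assumption; it is not Snap's specific method.

        # Sketch (assumed PyTorch): blend hard labels with the GNN teacher's soft labels.
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
            hard = F.cross_entropy(student_logits, labels)
            soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                            F.softmax(teacher_logits / T, dim=-1),
                            reduction="batchmean") * (T * T)
            return alpha * hard + (1 - alpha) * soft

        # Assumed workflow: run the slow GNN teacher once offline to cache teacher_logits
        # per node, then train a fast MLP student on node features alone for serving.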

    Neil is a Lead Research Scientist and Manager at Snap Research, working on machine learning algorithms and applications on large-scale graph data. His work has resulted in 50+ conference and journal publications, in top venues such as ICLR, NeurIPS, KDD, WSDM, WWW, AAAI and more, including several best-paper awards. He has also served as an organizer, chair and senior program committee member at a number of these. He has had previous research experiences at Lawrence Livermore National Laboratory, Microsoft Research, and Twitch. He earned a PhD in Computer Science in 2017 from Carnegie Mellon University’s Computer Science Department, funded partially by the NSF Graduate Research Fellowship.

  • 4:25

    From Synthetic Data to Mixed Reality

    Aayush Prakash - Engineering Manager - Meta Reality Lab

    Synthetic data usage is growing as a significant solution to the data scalability challenge of supervised deep learning. This is especially true when real data is difficult to acquire and/or annotate, which is the case in many mixed-reality-related computer vision tasks. However, synthetic data can also be expensive to create, as it requires domain experts to carefully curate 3D models and simulation results. Hence, scaling the generation of effective synthetic data is key to unlocking many user experiences within the Metaverse. We will review a few techniques for synthetic data generation and domain adaptation, and their application to mixed reality.

    - Synthetic data has grown rapidly (50+ companies) in the last few years to solve the data needs of computer vision
    - Synthetic data suffers from certain challenges (domain gap) and there are methods to address them
    - Synthetic data plays an important role in unlocking key applications inside the Metaverse
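
    As a rough illustration of the domain-gap point above, the sketch below mixes a supervised loss on labeled synthetic images with a crude feature-statistics alignment penalty on unlabeled real images. It assumes a hypothetical model returning (features, logits) and is not Meta's pipeline.

        # Sketch (assumed PyTorch): synthetic supervision + simple sim-to-real alignment.
        import torch.nn.functional as F

        def adaptation_step(model, syn_imgs, syn_labels, real_imgs, align_weight=0.1):
            syn_feats, syn_logits = model(syn_imgs)
            real_feats, _ = model(real_imgs)
            task_loss = F.cross_entropy(syn_logits, syn_labels)   # labels come "free" from sim
            # Crude alignment: match first and second moments of the two feature batches.
            align_loss = (F.mse_loss(syn_feats.mean(0), real_feats.mean(0)) +
                          F.mse_loss(syn_feats.std(0), real_feats.std(0)))
            return task_loss + align_weight * align_loss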

    Aayush is an engineering manager who leads the machine learning team within the synthetic data organization at Reality Labs, Meta. His group works on problems at the juncture of machine learning, computer vision and computer graphics. They tackle challenges in domain adaptation, neural rendering and other sim2real problems for mixed reality. Before joining Meta, he was the head of machine learning at the synthetic data startup AI Reverie. Prior to this, he worked at Nvidia, where he spent 6 years on synthetic data research in computer vision. While at Nvidia, his group delivered some of the prominent works in synthetic data creation. He graduated with a B.Tech in E&ECE from Indian Institute of Technology (IIT) Kharagpur, India, in 2010, and an MASc in Computer Engineering from the University of Waterloo, Canada, in 2013.

  • 4:50

    Networking Reception

  • 6:00

    End of Day 1

    Day 2

  • 8:00

    Registration & Light Breakfast

  • 9:00

    Welcome Note & Opening Remarks

  • CHAIR

    Divyansh Agarwal - Senior Research Engineer - Salesforce AI Research

  • REINFORCEMENT LEARNING

  • 9:10

    Industrial Task Suite: Deep Reinforcement Learning for Industrial Cooling System Control

    Praneet Dutta - Senior Research Engineer - DeepMind

    This talk details the work in developing simulations for AI-based industrial control. It is part of his group's broader efforts to demonstrate real-world energy savings in industrial cooling systems using Reinforcement Learning, published at the NeurIPS workshop on RL in the real world. We cover the design choices for developing the `Industrial Task Suite` to enable research in this domain.

    - Design choices for developing a suite of Industrial simulators for AI control.
    - Experimentation with Hierarchical Reinforcement Learning for efficient chiller switching to minimize energy wastage.
    - Challenges in real world Reinforcement Learning
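
    To give a flavor of what an "industrial task" simulator interface can look like, the toy environment below exposes a gym-style reset/step API for chiller switching, trading energy cost against a temperature constraint. The dynamics and reward are placeholders and are unrelated to the actual Industrial Task Suite.

        # Toy chiller-switching environment (illustrative only).
        import numpy as np

        class ToyChillerEnv:
            def __init__(self, n_chillers=3, setpoint=22.0):
                self.n_chillers, self.setpoint = n_chillers, setpoint

            def reset(self):
                self.temp = 26.0
                return np.array([self.temp])

            def step(self, action):                      # action: number of chillers on
                cooling = 1.5 * action                   # toy thermal model
                heat_load = np.random.uniform(1.0, 3.0)  # random building load
                self.temp += 0.1 * (heat_load - cooling)
                energy_cost = 0.5 * action
                comfort_penalty = max(0.0, self.temp - self.setpoint)
                reward = -(energy_cost + 10.0 * comfort_penalty)
                return np.array([self.temp]), reward, False, {}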

     

    At Google DeepMind, Praneet is a Senior Research Engineer, having previously worked on AI for Industrial Controls in its Applied group. Praneet serves on the Technical Program Committee for leading AI venues such as NeurIPS, ICML and ICLR, among others. He is a member of the Confederation of Indian Industry's Artificial Intelligence Task Force. Prior to DeepMind, he was a Machine Learning Engineer at Google Cloud. He holds an MS in Electrical and Computer Engineering from Carnegie Mellon University and is an alumnus of the Stanford Graduate School of Business Ignite program.

  • PRACTICAL DEEP LEARNING APPLICATIONS

  • 9:35

    Deep Learning Utilization for Personalization at Albertsons / Safeway

  • SPEAKER

    Miguel Paredes - Vice President of AI & Data Science - Albertsons Companies

  • SPEAKER

    Massieh Najafi - Senior Director of AI & Data Science - Albertsons Companies

  • 10:00

    COFFEE BREAK

  • 10:45

    How LinkedIn uses Deep Learning to Help you Find People

    Mathew Teoh - Senior Machine Learning Engineer - LinkedIn

    If you’ve used LinkedIn, you’re probably familiar with one of its oldest capabilities: searching for other people. Sometimes you find people you know, sometimes it’s someone new. Regardless of who you’re looking for, LinkedIn wants to connect you to people you find the most interesting. How does LinkedIn do that?

    In this presentation, we will take a tour of the ranking models used in LinkedIn’s People Search, and we’ll explore the deep learning architectures in production today. In addition, we’ll take a look at some of the design decisions of the past, what we’ve learned along the way, and what we have in store for the future.

    - A look into LinkedIn's ranking models used in People Search
    - What deep learning architectures are in production and how does this shape the model
    - What lessons have been learnt in the process and where can we expect this to go in the near future? 
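
    As generic background for the ranking discussion above, the sketch below shows a two-tower scoring model of the kind commonly used for people and entity search: a query tower and a member-profile tower scored by dot product. It is illustrative only and does not describe LinkedIn's production architecture.

        # Sketch (assumed PyTorch): two-tower query/member relevance scoring.
        import torch.nn as nn

        class TwoTower(nn.Module):
            def __init__(self, vocab_size=50_000, dim=128):
                super().__init__()
                self.query_tower = nn.Sequential(nn.EmbeddingBag(vocab_size, dim), nn.Linear(dim, dim))
                self.member_tower = nn.Sequential(nn.EmbeddingBag(vocab_size, dim), nn.Linear(dim, dim))

            def forward(self, query_tokens, member_tokens):   # both: (B, T) int64 ids
                q = self.query_tower(query_tokens)            # (B, dim) query embedding
                m = self.member_tower(member_tokens)          # (B, dim) member embedding
                return (q * m).sum(-1)                        # dot-product relevance score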

    Mat is passionate about helping others find what they need.

    As an ML engineer at LinkedIn, he develops search ranking algorithms, helping members find other people that are interesting to them. Before that, he built the NLP system at brain.ai, an early-stage startup that helps users shop by just saying what they need. And before that, he worked as a Data Scientist at Quora, analyzing experiments that helped users find answers to their questions.

  • 11:10

    Enabling a Delightful Consumer Experience at DoorDash with Deep Learning 

    Sudeep Das - Data Science Manager - ML - DoorDash

    At DoorDash, our mission is to empower local economies. As our offerings expand beyond made-to-order food delivery to new product verticals like groceries, convenience, and retail, so do the responsibilities of the machine learning algorithms that power seamless consumer experiences at every touchpoint in the product. Personalization, search, and recommendations for low-in-stock item substitutions all play major roles in the consumer-facing experience, enabling fast basket building, delightful discoveries, and fulfilled orders. A foundation on which these algorithms rely is an accurate product catalog and taxonomy, where we also apply machine learning methods. In this talk, we will take you through some of the high-level concepts of how we are leveraging Deep Learning within each of these main areas of ML application at DoorDash.

    - Personalization, Search and Substitution Recommendations are key pillars of a delightful consumer experience on DoorDash 
    - These algorithms rely on an accurate and dynamic product catalog and taxonomy
    - Machine learning, especially deep learning, has been a cornerstone for unlocking each of these areas
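
    As a small illustration of the substitution pillar above, the sketch below ranks catalog items by embedding similarity to an out-of-stock item. The embedding table is an assumed input; this is not DoorDash's production logic.

        # Nearest-neighbor substitution over (assumed) L2-normalized item embeddings.
        import numpy as np

        def recommend_substitutes(item_id, item_embeddings, catalog_ids, k=5):
            query = item_embeddings[item_id]
            scored = [(other, float(np.dot(query, item_embeddings[other])))
                      for other in catalog_ids if other != item_id]
            scored.sort(key=lambda pair: pair[1], reverse=True)   # cosine similarity ranking
            return scored[:k]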

    Sudeep is an Applied Machine Learning Manager at DoorDash, where he leads Personalization, Search, Substitution and Catalog ML within the New Verticals. He was previously a Machine Learning Area Lead at Netflix, where his main focus was on developing the next generation of machine learning algorithms to drive the personalization, discovery and search experience in the product. Sudeep has more than fifteen years of experience in machine learning, applied both to large-scale scientific problems and in industry. He holds a PhD in Astrophysics from Princeton University.

  • 11:35

    ML Applications for Accelerating Discovery of New Renewable Energy Storage Materials and Batteries

    Alishba Imran - Researcher - Berkeley Artificial Intelligence Research

    Deep learning methods such as graph neural networks and active learning approaches are being widely adopted for the discovery of novel renewable energy materials and batteries. These methods are able to predict materials properties, accelerate simulations and predict synthesis routes of new materials. In her presentation, Alishba will highlight key areas of development such as graph neural networks for property prediction in catalysts and crystals, the synthesis of novel materials, and the importance of autonomous self-driving labs.

     

    - GNNs can be utilized for a complete representation of materials on the atomic level while incorporating physical laws and larger scale phenomena.
    - Applications of GNNs for materials property prediction and graph-based networks for predicting chemical reaction pathways in synthesis of materials.
    - Importance of integrating autonomous self-driving laboratories that combine AI with automated robotic platforms for accelerating the discovery and validation of novel materials.
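
    To make the property-prediction point above concrete, the sketch below shows one message-passing step over an atom graph followed by a pooled readout to a scalar property (e.g. formation energy). Layer sizes and the readout target are illustrative assumptions, not a specific published model.

        # Tiny message-passing network for graph-level property prediction (sketch).
        import torch
        import torch.nn as nn

        class TinyMPNN(nn.Module):
            def __init__(self, node_dim=64):
                super().__init__()
                self.message = nn.Linear(2 * node_dim, node_dim)
                self.update = nn.GRUCell(node_dim, node_dim)
                self.readout = nn.Linear(node_dim, 1)

            def forward(self, node_feats, edge_index):           # (N, D) atoms, (2, E) bonds
                src, dst = edge_index
                msgs = self.message(torch.cat([node_feats[src], node_feats[dst]], dim=-1))
                agg = torch.zeros_like(node_feats).index_add_(0, dst, msgs)  # sum per atom
                node_feats = self.update(agg, node_feats)
                return self.readout(node_feats.mean(dim=0))      # pooled scalar property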

    Alishba Imran is a 19-year-old machine learning developer working on accelerating hardware/automation and energy storage.
    Currently, Alishba is doing research at the Berkeley AI Research Lab, and she has also done research at NVIDIA on RL-based simulation. Alongside this, she is publishing a book with O'Reilly on her work. Previously, Alishba led ML research at SJSU/the BLINC lab to reduce the cost of prosthetics from $10,000 to $700, and led neuro-symbolic AI for Sophia the Robot, the world's most human-like robot. She also managed a product division at Cruise, working on their AI team to develop novel sequence models for perception and prediction to power their self-driving cars. Alishba leverages machine learning and hardware with the goal of developing general machine intelligence and DL applications to discover new materials/batteries for climate. She was also the co-founder of the venture-backed startup Voltx, a software platform that utilized machine learning and physics simulations to accelerate battery testing and materials discovery for batteries. Voltx piloted its work with the largest battery manufacturers/OEMs and reduced their lab-to-commercialization process from a few months down to just a few days.

  • 12:00

    LUNCH

  • PLENARY SESSION

  • 1:00

    Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos

    Jeff Clune - Associate Professor of Computer Science - University of British Columbia

    Pretraining on noisy, internet-scale datasets has been heavily studied as a technique for training models with broad, general capabilities for text, images, and other modalities. However, for many sequential decision domains such as robotics, video games, and computer use, publicly available data does not contain the labels required to train behavioral priors in the same way. We extend the internet-scale pretraining paradigm to sequential decision domains through semi-supervised imitation learning wherein agents learn to act by watching online unlabeled videos. Specifically, we show that with a small amount of labeled data we can train an inverse dynamics model accurate enough to label a huge unlabeled source of online data -- here, online videos of people playing Minecraft -- from which we can then train a general behavioral prior. Despite using the native human interface (mouse and keyboard at 20Hz), we show that this behavioral prior has nontrivial zero-shot capabilities and that it can be fine-tuned, with both imitation learning and reinforcement learning, to hard-exploration tasks that are impossible to learn from scratch via reinforcement learning. For many tasks our models exhibit human-level performance, and we are the first to report computer agents that can craft diamond tools, which can take proficient humans upwards of 20 minutes (24,000 environment actions) of gameplay to accomplish.

     

    Previously, Clune was a research team leader at OpenAI and also a senior research manager and founding member of Uber AI Labs, which was formed after Uber acquired the startup Geometric Intelligence. Prior to Uber, he was the Loy and Edith Harris Associate Professor in Computer Science at the University of Wyoming.
    Clune conducts research in three related areas of machine learning (and combinations thereof): Deep learning, Evolving neural networks, and Robotics.

     

    • I summarize our video pretraining (VPT) work, described in an OpenAI blog post: https://openai.com/blog/vpt
    • We extend the GPT paradigm of performing unsupervised training in large models on internet-scale data to learning from online video
    • Like GPT, VPT trains on internet data and can be fine-tuned with reinforcement learning: it performs at human-level on previously unsolvable tasks, here using a computer to do tasks that take humans over 20 minutes and over 24,000 actions 
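
    The recipe in the abstract can be summarized in a few lines of pseudocode: a small labeled set trains an inverse dynamics model (IDM), the IDM pseudo-labels internet-scale video, and imitation on those pseudo-labels yields the behavioral prior. The helper functions (train_idm, behavior_clone, finetune_rl) are hypothetical hooks, not OpenAI's code.

        # VPT-style pipeline, as described in the abstract (pseudocode sketch).
        def video_pretraining(small_labeled_clips, large_unlabeled_videos,
                              train_idm, behavior_clone, finetune_rl=None):
            # 1. IDM: predict the action between frames; it may look at future frames,
            #    which makes it far more data-efficient than a causal policy.
            idm = train_idm(small_labeled_clips)
            # 2. Pseudo-label the large unlabeled video corpus with the IDM.
            pseudo_labeled = [(video, idm.predict_actions(video))
                              for video in large_unlabeled_videos]
            # 3. Behavioral prior: causal policy trained by imitation on pseudo-labels.
            policy = behavior_clone(pseudo_labeled)
            # 4. Optionally fine-tune with RL on hard-exploration tasks.
            return finetune_rl(policy) if finetune_rl else policy
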
  • 1:25

    Why Greatness Cannot Be Planned

    Kenneth Stanley - CEO - Maven

    Kenneth O. Stanley is currently deciding his next adventure after most recently leading a research team at OpenAI on the challenge of open-endedness. He was previously Charles Millican Professor of Computer Science at the University of Central Florida and was also a co-founder of Geometric Intelligence Inc., which was acquired by Uber to create Uber AI Labs, where he was head of Core AI research. He received a B.S.E. from the University of Pennsylvania and received a Ph.D. from the University of Texas at Austin. He is an inventor of the Neuroevolution of Augmenting Topologies (NEAT), HyperNEAT, novelty search, POET, and ELM algorithms, as well as the CPPN representation, among many others. His main research contributions are in neuroevolution (i.e. evolving neural networks), generative and developmental systems, coevolution, machine learning for video games, interactive evolution, quality diversity, and open-endedness. He has won best paper awards for his work on NEAT, NERO, NEAT Drummer, FSMC, HyperNEAT, novelty search, Galactic Arms Race, POET, and MCC. His original 2002 paper on NEAT also received the 2017 ISAL Award for Outstanding Paper of the Decade 2002 - 2012 from the International Society for Artificial Life. He is a coauthor of the popular science book, "Why Greatness Cannot Be Planned: The Myth of the Objective" (published by Springer), and has spoken widely on its subject.

  • Panel Discussion:

  • 1:50

    What is the future of Deep Learning?

    - Beyond the hype and the buzzwords, which areas of deep learning are currently advancing, and what can we expect to see from them in the near future?
    - Is there a correlation between research expectations and the adopted use within industry and enterprise?
    - What are the current challenges and roadblocks that could potentially hinder the advancement of deep learning?

  • MODERATOR

    Aarti Bagul - Principal Machine Learning Solutions Engineer - Snorkel AI

  • PANELIST

    Ipsita Mohanty - Applied Scientist/Software Engineer, Machine Learning - Technical Lead - Walmart Global Tech

    Ipsita Mohanty is a Software Engineer, Machine Learning - Technical Lead, working on several key product and research initiatives at Walmart Global Tech. She has an MS degree in Computer Science from Carnegie Mellon University, Pittsburgh. Prior to her Master's program, Ipsita worked as an Associate for six years, developing trading and machine learning algorithms at Goldman Sachs in their Global Markets Division at their Bengaluru and London offices. She has published work on Natural Language Understanding, and her research spans the disciplines of computer science, deep learning, and human psychology.

  • 3:00

    End of Summit