In this session, Abhijit Bose, Head of the Center for Machine Learning at Capital One; Alok Gupta, Head of Data at DoorDash; and Vidyaranya Devigere, Head of Algorithms at Overstock, share how they are transforming their data pipelines across collection, preparation, management, and development to improve ML throughput and build meaningful AI applications for their businesses.
Watch this session to gain insights on:
- Evaluating build vs. buy decisions for ML ecosystem management
- Why a centralized MLOps platform is a strategic advantage for companies
- The tech stacks these companies use to build centralized ML platforms
- Challenges faced while building data pipelines
- Best practices to stay ahead of drift in ML systems
Here are 3 key takeaways from the session:
- Companies prefer a centralized MLOps platform because it fosters close collaboration and makes it easier to enforce best practices for deploying AI.
- One of the most common data challenges businesses face is getting online and offline features to agree. Models are typically trained on stored (offline) data, but in production the same features must be recomputed from real-time (online) data. As a result, the data the model sees at prediction time can differ from the data it was trained on.
- Human-in-the-loop will continue to play an important role in AI development, particularly in high-risk areas such as healthcare, manufacturing, and autonomous mobility.
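The online/offline feature mismatch in the second takeaway can be made concrete with a parity check: compute the same feature through both paths and flag any disagreement. The sketch below is illustrative only; the feature (a user's average order value) and all function names are hypothetical, not from the session.

```python
# Minimal sketch of an online/offline feature parity check.
# The feature here (average order value) is a hypothetical example.
from statistics import mean


def offline_avg_order_value(orders):
    """Offline/batch version: computed from stored historical orders."""
    return mean(o["amount"] for o in orders) if orders else 0.0


def online_avg_order_value(running_sum, running_count):
    """Online version: recomputed at serving time from streaming counters."""
    return running_sum / running_count if running_count else 0.0


def parity_check(offline_value, online_value, tolerance=1e-6):
    """Flag training/serving skew when the two pipelines disagree."""
    return abs(offline_value - online_value) <= tolerance


# Compare the two pipelines on the same logical data.
orders = [{"amount": 10.0}, {"amount": 30.0}]
off = offline_avg_order_value(orders)          # from the warehouse
on = online_avg_order_value(40.0, 2)           # from streaming counters
print(parity_check(off, on))
```

In practice, checks like this are run continuously on sampled traffic; a persistent mismatch signals that the two feature pipelines have drifted apart and the model's inputs no longer match its training distribution.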