Good processes and tooling allow companies to adapt and thrive amid changing business requirements and short development cycles. In Dagster, a new workflow engine, combined with PyTorch, we found a match that guarantees reliable deployments with fast-changing models. Join us and find out how!
Extending a deep learning model beyond your dataset to make it available to multiple use cases and stakeholders requires a structured workflow. Finding the proper workflow and tooling depends on your use case, but what happens when your use case is dynamic?
In a start-up, changing business requirements are standard, and so are changing inputs, outputs, and model types: the pipeline may already be outdated by the time you have put a "model in production".
In this talk, we present a pipeline built on Dagster and PyTorch, deployed on Kubernetes, that adds little overhead when making models available to colleagues and projects with minimal requirements, while staying flexible enough to handle highly customized workflows when needed.