The problem
Loom’s video processing pipeline was a growing cost center. Encoding times were increasing as video quality options expanded, and the existing architecture didn’t take advantage of modern hardware acceleration. Users were waiting too long for their recordings to be ready to share.
Our approach
We audited the existing pipeline and identified three main bottlenecks: redundant transcoding passes, underutilized GPU instances, and a queue architecture that couldn’t prioritize short videos. We rebuilt the pipeline with a priority queue system, consolidated transcoding into a single adaptive pass, and migrated to GPU-accelerated encoding on spot instances.
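The duration-aware priority queue can be sketched as a two-tier heap: short recordings go in the high-priority tier, and a monotonic counter keeps first-in-first-out order within each tier. This is an illustrative sketch only; the class name, threshold constant, and API are assumptions, not Loom's production code.

```python
import heapq
import itertools

# Threshold for the "short video" fast path (the case study's 2-minute cutoff).
SHORT_CUTOFF_S = 120

class EncodeQueue:
    """Two-tier priority queue: short videos first, FIFO within each tier.
    Hypothetical sketch, not Loom's actual implementation."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserves enqueue order

    def enqueue(self, video_id: str, duration_s: float) -> None:
        tier = 0 if duration_s < SHORT_CUTOFF_S else 1
        heapq.heappush(self._heap, (tier, next(self._seq), video_id))

    def dequeue(self) -> str:
        _, _, video_id = heapq.heappop(self._heap)
        return video_id

q = EncodeQueue()
q.enqueue("long-demo", 900)
q.enqueue("quick-update", 45)
q.enqueue("all-hands", 1800)
print(q.dequeue())  # → quick-update  (the short video jumps the line)
```

Keying the heap on a coarse tier rather than raw duration avoids starving long videos behind an endless stream of slightly-shorter ones: within a tier, jobs still complete in arrival order.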
Results
Average processing time dropped 60%. Infrastructure costs fell 45%, driven by spot instance utilization and the elimination of redundant transcoding passes. With the priority queue in place, short videos (under 2 minutes) were ready to share in under 10 seconds.