The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad-hoc queries and dashboards.
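The post describes acquiring data two ways: change events flowing through Kafka, plus periodic snapshots of the PostgreSQL databases. A common way to combine those two feeds is to rebuild a table's current state by replaying the post-snapshot events on top of the latest snapshot. Below is a minimal pure-Python sketch of that snapshot-plus-changelog pattern; the record shapes, event tuple format, and the `apply_events` function name are illustrative assumptions, not details from the post.

```python
def apply_events(snapshot, events):
    """Rebuild current table state from a periodic snapshot plus a changelog.

    snapshot: dict mapping primary key -> row, as of snapshot time
              (e.g. loaded from a periodic PostgreSQL dump in S3)
    events:   ordered change events captured after the snapshot
              (e.g. consumed from a Kafka topic), each (op, key, row)
    """
    state = dict(snapshot)  # start from the snapshot's view of the table
    for op, key, row in events:
        if op in ("insert", "update"):
            state[key] = row      # upsert the newer row version
        elif op == "delete":
            state.pop(key, None)  # drop rows deleted after the snapshot
    return state


# Illustrative example: a tiny "clients" table snapshot plus later events
snapshot = {1: {"name": "Ada"}, 2: {"name": "Grace"}}
events = [
    ("update", 1, {"name": "Ada L."}),   # row changed after snapshot
    ("insert", 3, {"name": "Edsger"}),   # new row after snapshot
    ("delete", 2, None),                 # row removed after snapshot
]
current = apply_events(snapshot, events)
```

In a real pipeline this merge would typically run as a Spark job over S3-backed snapshot and event datasets rather than in-memory dicts, but the reconciliation logic is the same.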