Spark: Cluster Computing with Working Sets - USENIX


a dataset, Spark will recompute them when they are used. We chose this design so that Spark programs keep working (at reduced performance) if nodes fail or if a dataset is too big. This idea is loosely analogous to virtual memory. We also plan to extend Spark to support other levels of persistence (e.g., in-memory replication across multiple ...
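The fall-back behavior described above can be sketched with a toy cache that rebuilds missing partitions from their compute function on demand. This is a minimal illustration of the idea, not Spark's actual implementation; the class name `RecomputingDataset` and its methods are hypothetical.

```python
# Sketch of the design in the excerpt: when a partition is missing from
# memory (evicted, or its node failed), it is transparently recomputed
# from its lineage instead of causing the program to fail.

class RecomputingDataset:
    """Caches computed partitions; recomputes any that have been dropped."""

    def __init__(self, compute_partition, num_partitions):
        self._compute = compute_partition   # lineage: how to rebuild a partition
        self._num_partitions = num_partitions
        self._cache = {}                    # partition index -> materialized data
        self.recomputations = 0             # counter for illustration only

    def get(self, i):
        if i not in self._cache:            # cache miss: evicted or lost
            self.recomputations += 1        # slower path, but work continues
            self._cache[i] = self._compute(i)
        return self._cache[i]

    def evict(self, i):
        """Simulate memory pressure or a failed node dropping a partition."""
        self._cache.pop(i, None)


# Usage: a partition is rebuilt on demand after eviction.
ds = RecomputingDataset(lambda i: [x * x for x in range(i * 4, (i + 1) * 4)], 3)
first = ds.get(0)        # computed once, then cached
ds.evict(0)              # drop it, as under memory pressure
again = ds.get(0)        # transparently recomputed from lineage
assert first == again and ds.recomputations == 2
```

The analogy to virtual memory is that callers never see the eviction; they only pay the recomputation cost, just as a page fault costs latency rather than correctness.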
