A Distributed Computing Network
Sandstorm is a way to run a job in parallel across several compute resources at the same time. Big Data processing, long-running jobs, recurring jobs… all are a good fit for The Sandstorm Network.
It’s faster than running your job on a single computer.
It’s easier than setting up and maintaining your own distributed computing cluster.
More languages will be added in the future. If you need a language that isn’t on our list, let us know and we’ll work with you to add support for it.
You provide us with two things: your data set and a function. The data can be a single large corpus or an incoming stream. The function must be stateless: its output depends only on its input. Together, a function and a block of data make a job.
We send all of your jobs out to computes in The Sandstorm Network. The computes run the jobs and send the results back.
The process is repeated across all of The Sandstorm Network until all of the jobs are done. We report the results back to you — either in aggregate at the end or as a stream during the process.
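The model above can be sketched in a few lines. This is an illustrative local stand-in, not the Sandstorm API: Python’s multiprocessing pool plays the role of the network, `word_count` is a hypothetical stateless function, and each (function, block) pair corresponds to one job.

```python
from multiprocessing import Pool

def word_count(block):
    """A stateless function: its result depends only on its input block."""
    return len(block.split())

# A corpus split into blocks; each (function, block) pair is one job.
blocks = ["the quick brown fox", "jumps over", "the lazy dog"]

if __name__ == "__main__":
    with Pool() as pool:
        # Each block is processed independently, like jobs fanned out to
        # separate computes; results come back as the jobs finish.
        results = pool.map(word_count, blocks)
    # Aggregate the per-block results at the end.
    print(sum(results))  # 4 + 2 + 3 = 9
```

Because the function is stateless, any compute can run any block in any order, and blocks can be retried or streamed in without coordination.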
Our network is an ad-hoc amalgamation of different job computes.
This diversity lets us run each job in the environment that best suits it.
We’re currently looking for beta customers. Have a problem that needs solving with on-demand distributed computing? Let us build something for you! Get in touch, we’d love to talk with you.
How much? We’re not sure yet. We’re still working on it. If you’ve got a job that you’d run on The Sandstorm Network, how much would you pay to run the whole job? How much per block of data? How quickly would you need the results?