concurrency - Why is there no implicit parallelism in Haskell?
Haskell is functional and pure, so it has exactly the properties a compiler would need in order to tackle implicit parallelism.

Consider this trivial example:
```haskell
f = do
  a <- Just 1
  b <- Just $ Just 2
  -- ^ The line above does not use the `a` variable, so it can safely be
  --   executed in parallel with the preceding line
  c <- b
  -- ^ The line above references the `b` variable, so it can only be
  --   executed sequentially after it
  return (a, c)
  -- ^ On exit from the monad's scope we wait for all computations to
  --   finish and gather the results
```
Schematically, the execution plan can be described as:
```
                do
        +-------+-------+
        |               |
   a <- Just 1     b <- Just $ Just 2
        |               |
        |          c <- b
        |               |
        +-------+-------+
                |
          return (a, c)
```
Why is there no such functionality implemented in the compiler yet, behind a flag or a pragma? Are there practical reasons?
This is a long-studied topic. While you can implicitly derive parallelism in Haskell code, the problem is that there is too much of it, at too fine a grain, for current hardware.
So you end up spending effort on bookkeeping rather than on running things faster.
Since we don't have infinite parallel hardware, it is all about picking the right granularity -- too coarse and there will be idle processors; too fine and the overheads will be unacceptable.
What we have instead is more coarse-grained parallelism (sparks), suitable for generating thousands or millions of parallel tasks (so not at the instruction level), which map down onto the mere handful of cores we typically have available today.
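Today those sparks are created with explicit annotations. As a minimal sketch of the kind of annotation an implicit scheme would have to insert automatically, here is `par`/`pseq` (exported from `GHC.Conc` in base; more commonly imported from the `parallel` package's `Control.Parallel`), with `fib` standing in for an arbitrary expensive computation:

```haskell
import GHC.Conc (par, pseq)

-- A deliberately expensive stand-in computation.
fib :: Int -> Int
fib n | n < 2     = n
      | otherwise = fib (n - 1) + fib (n - 2)

-- `a \`par\` e` sparks the evaluation of `a` (it may run on another core)
-- and returns `e`; `b \`pseq\` e` forces `b` before returning `e`, so the
-- current thread works on `b` while the spark works on `a`.
parPair :: (Int, Int)
parPair =
  let a = fib 25   -- sparked: may be evaluated in parallel
      b = fib 24   -- evaluated on the current thread
  in a `par` (b `pseq` (a, b))

main :: IO ()
main = print parPair
```

Note that the sparks only affect evaluation order, never the result: the program is deterministic whether or not any spark is actually picked up by a core.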
Note that for subsets of the language (e.g. array processing) there are automatic parallelization libraries with tight cost models.
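To illustrate why granularity matters, here is a sketch of a chunked parallel map built only on `par`/`pseq` from base: sparking one task per chunk rather than one per element keeps the bookkeeping cost proportional to the number of chunks, not the number of elements. `parMapChunked` and `chunksOf` are illustrative names for this sketch, not library functions:

```haskell
import GHC.Conc (par, pseq)

-- Split a list into chunks of at most n elements.
chunksOf :: Int -> [a] -> [[a]]
chunksOf _ [] = []
chunksOf n xs = let (h, t) = splitAt n xs in h : chunksOf n t

-- Force each element of a list to weak head normal form.
forceList :: [a] -> ()
forceList = foldr pseq ()

-- Spark one evaluation per chunk, not per element: a coarse enough
-- granularity that the spark overhead does not swamp the useful work.
parMapChunked :: Int -> (a -> b) -> [a] -> [b]
parMapChunked n f xs = concat (sparkAll (map (map f) (chunksOf n xs)))
  where
    sparkAll []       = []
    sparkAll (c : cs) = forceList c `par` (c : sparkAll cs)

main :: IO ()
main = print (parMapChunked 100 (* 2) [1 .. 1000 :: Int])
```

With a chunk size of 100, a million-element list creates ten thousand sparks instead of a million; tuning that number is exactly the granularity choice described above, and it is hard for a compiler to get right automatically.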
For background on this, see http://research.microsoft.com/en-us/um/people/tharris/papers/2007-fdip.pdf, which introduces an automated approach to the insertion of `par` in arbitrary Haskell programs.
haskell concurrency parallel-processing compiler-optimization