From the 1960s to the end of the 1980s, parallel computing made remarkable progress, followed by a significant downturn in the early 1990s. This led leaders in the field to wonder whether parallel computing had been a dead end. Since the mid-2000s, however, we have witnessed a remarkable comeback with the advent of multicore architectures – some have called it the “second spring” of parallel computing – and the trend has continued. Even so, this is a good time to stop and ponder the lessons we have already learned from the history of parallel computing. One important reason is that, although much good work on parallel models of computation is ongoing, there is no commonly accepted program execution model for general-purpose parallel computing. Consequently, the so-called von Neumann bottleneck remains a fundamental challenge for parallel architecture and system model design.
The Parallel Model and System: Data Flow and Beyond STC will focus on all manner of directions derived from dataflow, combining all aspects of the technology needed and exploiting achievements from areas ranging from hardware and systems to algorithms and applications. The STC will serve as the hub that brings dataflow research together to construct an ecosystem of dataflow-based next-generation parallel models and systems, whether in a high-performance computing system or an energy-conscious embedded mobile system.