Thursday, November 5, 2009

Part 1 - Pipelining and parallelism

This post is about data parallelism, pipelining, and geometric decomposition.

First, pipelining. One of the interesting aspects is the throughput of the data processed by the pipeline. A real-world analogy comes from my personal experience working on video processing: to render a 3D video from multiple stereoscopic cameras, smooth output is obtained only if the video streams from each camera are combined to achieve the target frame rate. Although this is not strictly a parallel programming effort, I can see some similarities here. Another interesting point the paper makes is that the slowest computation creates a bottleneck for the entire concurrent pipeline. The ability to plug objects or frameworks into the pipeline is another interesting aspect mentioned in the paper.

The second part is about data parallelism. I came across an article that discusses it. Sometimes this method of processing data utilizes the capacity of a multi-core machine to the fullest extent. At the end of the post, they describe a real-life implementation of this on an advanced German nuclear fusion platform, the ASDEX tokamak.

The last part is about the Geometric Decomposition pattern. I feel that one of the most important elements of the solution is the data decomposition itself, especially the granularity of the decomposition. Another important aspect is the need to balance decomposition against code reuse, especially in larger systems. Though I have actually worked with this kind of pattern, I would love to hear about more instances of it through other real-world examples.
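The bottleneck point about pipelining can be sketched in a few lines of Python. This is a minimal illustration, not any particular paper's implementation: the stage functions (here simple lambdas standing in for decode/transform/encode steps) are hypothetical, and each stage runs on its own thread connected by queues, so items back up behind the slowest stage and the end-to-end rate drops to that stage's rate.

```python
import queue
import threading

def stage(worker, inbox, outbox):
    """Pull items from inbox, apply worker, push results to outbox until a sentinel arrives."""
    while True:
        item = inbox.get()
        if item is None:          # sentinel: propagate downstream and stop
            outbox.put(None)
            break
        outbox.put(worker(item))

def run_pipeline(items, workers):
    """Chain the worker functions with queues, one thread per stage, and collect the output."""
    queues = [queue.Queue() for _ in range(len(workers) + 1)]
    threads = [threading.Thread(target=stage, args=(w, queues[i], queues[i + 1]))
               for i, w in enumerate(workers)]
    for t in threads:
        t.start()
    for item in items:            # feed the first stage
        queues[0].put(item)
    queues[0].put(None)
    results = []
    while (out := queues[-1].get()) is not None:
        results.append(out)
    for t in threads:
        t.join()
    return results

# Hypothetical three-stage pipeline over a stream of "frames":
frames = list(range(5))
print(run_pipeline(frames, [lambda x: x + 1, lambda x: x * 2, str]))
```

Because the stages run concurrently, steady-state throughput is set by the slowest worker, not by the sum of all stage times; speeding up any other stage changes nothing.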

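The granularity concern in geometric decomposition can also be sketched. The sketch below is my own illustration, not from the pattern paper: the domain is split into contiguous chunks of a caller-chosen size, each chunk is updated independently in a thread pool, and the pieces are stitched back together. The `smooth_chunk` update is a hypothetical local operation, and for simplicity it omits the boundary exchange between neighboring chunks that a real stencil would need.

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(data, chunk_size):
    """Split the domain into contiguous chunks of roughly chunk_size elements."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def smooth_chunk(chunk):
    """Hypothetical local update: average each element with its in-chunk neighbors."""
    n = len(chunk)
    out = []
    for i in range(n):
        window = chunk[max(0, i - 1):min(n, i + 2)]
        out.append(sum(window) / len(window))
    return out

def process(data, chunk_size, workers=4):
    """Update each chunk independently, then reassemble the domain in order."""
    chunks = decompose(data, chunk_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        smoothed = pool.map(smooth_chunk, chunks)
    return [x for chunk in smoothed for x in chunk]
```

The `chunk_size` parameter is exactly the granularity trade-off: smaller chunks balance load better across workers but pay more scheduling and boundary overhead, while larger chunks amortize overhead at the risk of idle workers.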