Thursday, October 15, 2009

Review of Chess

This paper presents Chess, a tool for finding Heisenbugs in concurrent programs. Systematic exploration of program behavior is one of Chess's key features, enabling it to find bugs quickly. One of the first things that strikes me about this paper is that the authors describe two real bugs found using Chess, and also how those bugs were then fixed. Another smart aspect of the tool is its control over thread execution. Even though the choice of algorithms for the search phase is a challenging problem, I think Chess handles this phase well.
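To make the problem concrete, here is a minimal Java sketch (my own illustration, not code from the paper) of the kind of Heisenbug that systematic schedule exploration is designed to catch:

```java
// Two threads increment a shared counter without synchronization.
// counter++ is a non-atomic read-modify-write, so certain thread
// interleavings lose updates. Because the failure appears only under
// particular schedules, it behaves as a Heisenbug; a tool like Chess
// makes such a failing interleaving reproducible by controlling the
// scheduler.
public class LostUpdate {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    counter++; // racy: read, add, write
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Often less than 200000 under an unlucky schedule.
        System.out.println(counter);
    }
}
```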

The goal of replaying a given concurrent execution deterministically is described very well in this paper. By providing a kind of 'restart' through its cleanup functions, Chess ensures that every run of the program starts from a consistent state. There is also a subtle emphasis on leaving some choices to the user, especially in delivering deterministic inputs, which is one of the interesting aspects of this work.

Tuesday, October 13, 2009

BA Chapter 14

This chapter gives a rather nostalgic overview of the basic architectural ideas behind the design of Smalltalk. The question of inheritance and its good/bad uses is addressed early on, and its support comes from the description of Smalltalk itself. Earlier, I came across a post describing how Smalltalk might make a comeback. The author of that post says that Smalltalk was ahead of its time, and now that OO concepts are firmly implanted in the heads of every beginning programmer, Smalltalk might be an option to consider. Many of the benefits espoused by languages such as Python and Ruby have been around since Smalltalk, and I think many people (who are unfamiliar with it) overlook this. People who like concepts such as metaprogramming and dynamic typing will find that Smalltalk suits their needs. Then again, the regrowth in Smalltalk's popularity might be just a temporary fad, as many of the earlier "problems" have since been solved by languages such as Java, by the growth and advancement of hardware, and by software that could deliver "executables". With the recent growth in web applications, I think Smalltalk frameworks such as Seaside should become popular as well.

Wednesday, October 7, 2009

Review of ReLooper

This paper describes how an Eclipse-based tool called ReLooper helps programmers parallelize their programs, using the ParallelArray data structure for arrays. The paper emphasizes determining when parallelizing a program would be unsafe (i.e., not thread-safe). A good deal of user interactivity is, I think, one of the important features of ReLooper. The evaluation that checks whether the refactored programs have conflicting memory accesses is useful. From the evaluation it seems clear that reporting all possible race conditions (that is, having no false negatives) is one of the several benefits of this work. There also seems to be a good deal of speedup (on both one and two cores) compared to the original code. The evaluation methodology, while by no means comprehensive, looks sufficient to show the inherent advantages of using ReLooper. I tend to disagree with some of the results on time taken (especially for the machine learning algorithms); it is hard to judge them in the absence of other techniques for comparison. For all of ReLooper's inherent advantages, though, I do not yet see a lot of practical applications, as many real-life programs are much more complicated, and it remains to be seen how research would progress from this stage to the point where many different kinds of code could be refactored in a similar manner.
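As a concrete illustration, here is a hedged sketch of the kind of loop-to-ParallelArray transformation ReLooper performs. The package and factory names follow the jsr166y/extra166y preview releases and may differ between versions:

```java
import jsr166y.ForkJoinPool;
import extra166y.Ops;
import extra166y.ParallelDoubleArray;

// Sketch of a ReLooper-style refactoring: a sequential element-wise
// loop becomes a parallel mapping over a ParallelDoubleArray.
public class ReLooperStyle {
    public static void main(String[] args) {
        double[] data = new double[1000000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        // Sequential original:
        //   for (int i = 0; i < data.length; i++)
        //       data[i] = Math.sqrt(data[i]);

        ForkJoinPool pool = new ForkJoinPool();
        ParallelDoubleArray pa =
                ParallelDoubleArray.createUsingHandoff(data, pool);
        pa.replaceWithMapping(new Ops.DoubleOp() {
            public double op(double x) { return Math.sqrt(x); } // applied in parallel
        });
        System.out.println(pa.get(4)); // 2.0
    }
}
```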

Tuesday, October 6, 2009

Software Architecture: OO Vs Func.

This chapter provides an interesting study of the benefits and disadvantages of functional programming versus object-oriented design. Before delving into the details presented in this chapter, I would like to mention a similar article that I came across a few days back. That article provided motivation for a programmer used to OO languages like C++, Java, and C# to take up programming in Haskell. The author makes the case for Haskell by pointing to immutable objects, higher-order functions, and inclusional polymorphism, and by noting that functional programmers are typically concerned with how data is constructed rather than what to do with the data (which is the case for OO programmers). Another article along similar lines makes the case for the Haskell programming language.

One of the things I would like to point out after reading the chapter is that the same metrics cannot be used for comparing the two programming styles. The argument for considering modularity of code (as a criterion for comparing programming styles) is valid, especially when developing large software systems. Sticking small functions together with "glue" is an interesting idea, and even though the specifics are clearly described, I think the details warrant a more nuanced discussion.
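To make the "glue" point concrete in Java (my own example, not from the chapter), here is a small sketch of a higher-order fold acting as glue: the same combining machinery is reused with different small functions:

```java
import java.util.Arrays;
import java.util.List;

// A higher-order fold: the "glue" that sticks a small two-argument
// function together with a list to build many different operations.
public class Glue {
    interface Op<A, B> { B apply(A a, B b); }

    static <A, B> B foldr(Op<A, B> f, B z, List<A> xs) {
        B acc = z;
        for (int i = xs.size() - 1; i >= 0; i--) {
            acc = f.apply(xs.get(i), acc);
        }
        return acc;
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3, 4);
        // sum = foldr (+) 0
        int sum = foldr(new Op<Integer, Integer>() {
            public Integer apply(Integer a, Integer b) { return a + b; }
        }, 0, xs);
        // product = foldr (*) 1 -- same glue, different small function
        int product = foldr(new Op<Integer, Integer>() {
            public Integer apply(Integer a, Integer b) { return a * b; }
        }, 1, xs);
        System.out.println(sum + " " + product); // 10 24
    }
}
```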

Refactoring Sequential Java Code for Concurrency via Concurrent Libraries

This paper presents a way to restructure sequential code into parallel code using concurrent utilities. It makes use of the java.util.concurrent framework in Java 5 and the ForkJoinTask framework slated for Java 7. I liked the argument that programming with locks is error-prone; though I believe this is a contentious topic, I tend to agree with the authors' comments on it. The frameworks mentioned in the paper address the research issues of usefulness, of making existing code thread-safe, and finally of efficiency. The evaluation methodology used by Concurrencer is fairly comprehensive, and I doubt that many questions could be raised about it.
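As a minimal sketch of the simplest conversion the paper supports (my own reconstruction of the pattern, not the authors' code), a lock-protected counter becomes an AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Concurrencer-style refactoring: replace a synchronized int counter
// with an AtomicInteger, trading an explicit lock for a single atomic
// read-modify-write operation.
public class Counter {
    // Before:
    //   private int value;
    //   public synchronized int next() { return ++value; }

    private final AtomicInteger value = new AtomicInteger();

    public int next() {
        return value.incrementAndGet(); // atomic, no lock needed
    }
}
```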

In the implementation, I liked the use of ConcurrentHashMap, which avoids locking the entire map. The number of refactorings supported is quite impressive and allows for parallelizing different kinds of sequential code patterns. Another important aspect of this paper is that the authors account for human intervention while retrofitting parallelism into sequential code. In this way, I believe they have allowed for future changes to the code (especially if drastic changes have to be made). It is also interesting to note that among the three conversions studied in the paper, only the ConcurrentHashMap conversion requires a degree of human intervention.
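Here is a hedged sketch of why that conversion needs human judgment: the common check-then-put idiom on a locked map becomes a single atomic putIfAbsent, but the computation may now run redundantly (the class and compute method are my own illustration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// HashMap -> ConcurrentHashMap refactoring: the check-then-act pair,
// which needed a lock on the whole map, becomes one atomic putIfAbsent.
public class CacheDemo {
    private final ConcurrentMap<String, Integer> cache =
            new ConcurrentHashMap<String, Integer>();

    // Before:
    //   synchronized (map) {
    //       if (!map.containsKey(key)) map.put(key, compute(key));
    //       return map.get(key);
    //   }
    public int lookup(String key) {
        Integer value = cache.get(key);
        if (value == null) {
            Integer fresh = compute(key);            // may run redundantly
            Integer prev = cache.putIfAbsent(key, fresh);
            value = (prev != null) ? prev : fresh;   // keep whichever won
        }
        return value;
    }

    private int compute(String key) { return key.length(); } // stand-in work
}
```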

When the bazaar sets out to build cathedrals

In this chapter, the authors describe the relationship of ThreadWeaver and Akonadi to the KDE project, and the development process in large-scale open-source projects. One of the first things that struck me, even before the authors described these two KDE projects, was the focus on developing and maintaining quality code within open-source projects. The authors hit the nail on the head when they allude to the fact that open-source code is often of higher quality than proprietary code. This can also be seen in the huge success of community open-source development programs like the Google Summer of Code. The motivation of simply "reaching the finish line", rather than money or fame, seems to work really well.

The Akonadi project by itself has several benefits, and one of the interesting things I noticed in its design was the focus on maintaining the stability of the overall system by running components that need access to a particular storage backend in separate processes. Linking components against third-party libraries without compromising the stability of the overall system is another aspect that seemed advantageous in the initial design of the Akonadi architecture. The authors mention a series of code optimizations that seemed potentially useful, and even though I have not looked into whether these have been implemented, I am interested in knowing the outcome. Does anyone know more about these optimizations?

Tuesday, September 29, 2009

Java Fork/Join Framework

In this paper, the author demonstrates the feasibility of developing a pure Java implementation of a scalable, parallel processing framework. JDK 7 is expected to include this framework. Fork/join algorithms are a form of divide-and-conquer: a goal is broken down into a smaller set of components, the individual components are computed (possibly in parallel), and finally the results are merged.
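To illustrate the pattern, here is a minimal sketch of a fork/join summation, written against the java.util.concurrent names that eventually shipped with JDK 7 (the paper itself predates these package names):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer sum: split the range until it is small enough to
// compute directly, then merge the partial results.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10000;
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {             // base case: sequential sum
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                            // run left half asynchronously
        long rightResult = right.compute();     // right half in this thread
        return left.join() + rightResult;       // merge partial results
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] data = new long[1000000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println("sum = " + total);
    }
}
```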

The experimental results in the paper, comparing the performance of Java fork/join against similar frameworks, yield some observations that are a bit unclear to me. For example, the speedups stay relatively flat in spite of an increase in the number of threads. Even though the author mentions application-specific reasons, it is likely that there are other fundamental reasons for this.
When this framework was announced for JDK 7, there was a great deal of excitement in the developer community. I am not sure the framework still commands as much enthusiasm as it did earlier, especially now that the JDK 7 early access downloads are out. Is the framework incorporated in this release? Can someone check?