This paper proposes two parallel inference algorithms for the Hierarchical Dirichlet Process (HDP) in a distributed/cluster setting. The proposed algorithms use a two-level approach to parallelization: at the top level, the data are distributed to individual processors/machines across a symmetric multiprocessing (SMP) cluster, and at the second level, each machine runs an existing algorithm previously developed for parallel inference in HDP-based models. The first algorithm uses the approximate distributed HDP (ADHDP) algorithm of Newman et al. (2009), whereas the second uses the Parallel HDP algorithm of Asuncion et al. (2008). The proposed algorithms are compared against a full MPI implementation based on Parallel HDP and are shown to scale better with an increasing number of cores.

The paper is essentially a straightforward extension of the existing distributed HDP algorithms: it simply reuses them on each machine and adds a synchronization step. There is no discussion of whether the existing algorithms (ADHDP and Parallel HDP) could further benefit from the SMP architecture, of possible communication strategies among processors, or of how certain issues, such as the merging of topics, are handled (vis-à-vis ADHDP). Some discussion would be nice.
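For concreteness, the two-level pattern the paper builds on can be sketched as follows. This is a hedged, heavily simplified illustration of the approximate-distributed synchronization idea (in the spirit of Newman et al., 2009), not the paper's actual sampler: `local_update` and `synchronize` are hypothetical helper names, and the per-token "draw" is a random placeholder rather than a real Gibbs step.

```python
import numpy as np

def local_update(global_counts, docs, num_topics, rng):
    """Stand-in for one sweep of per-machine sampling: each worker updates
    only its private copy of the topic-word count matrix (placeholder draws,
    not a real HDP Gibbs sampler)."""
    local = global_counts.copy()
    for doc in docs:
        for word in doc:
            topic = rng.integers(num_topics)  # placeholder for a Gibbs draw
            local[topic, word] += 1
    return local

def synchronize(global_counts, local_counts_list):
    """ADHDP-style approximate merge: fold each worker's delta back into
    the shared global counts after a local sweep."""
    merged = global_counts.copy()
    for local in local_counts_list:
        merged += local - global_counts
    return merged
```

In the two-level scheme, the outer level would run `synchronize` across machines (e.g. via MPI collectives) while the inner level applies the same split-sample-merge pattern across cores within each machine.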
