## David Mimno Topic Modeling Bibliography For Websites

**Evaluating the Stability of Embedding-based Word Similarities.** Maria Antoniak and David Mimno. TACL (6) 2018, 107--119 PDF

Word embeddings are unstable, especially for smaller collections. Lists of most-similar words can vary considerably between random initializations, even for methods that appear deterministic. For more reliable results, train embeddings on at least 25 bootstrap-sampled corpora and calculate average word similarities.
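A minimal NumPy sketch of this bootstrap-and-average procedure. The toy co-occurrence "embeddings" (plus a small jitter term mimicking random initialization) are my own stand-in for a real word2vec/GloVe training run; the corpus and vocabulary are invented for illustration:

```python
import numpy as np

def toy_embeddings(docs, vocab, rng):
    """Toy embeddings from a document-level co-occurrence matrix.
    (A stand-in for a real embedding training run.)"""
    index = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))
    for doc in docs:
        for w in doc:
            for v in doc:
                if w != v:
                    M[index[w], index[v]] += 1
    # small jitter so ties break differently across runs, like random init
    return M + 0.01 * rng.random(M.shape)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

docs = [["cat", "dog", "pet"], ["dog", "bone", "pet"],
        ["cat", "mouse", "pet"], ["stock", "market", "trade"]]
vocab = sorted({w for d in docs for w in d})
index = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
sims = []
for _ in range(25):  # 25 bootstrap-sampled corpora, as the paper suggests
    sample = [docs[i] for i in rng.integers(0, len(docs), size=len(docs))]
    # keep the vocabulary fixed; resampled-out documents contribute zeros
    E = toy_embeddings(sample, vocab, rng)
    sims.append(cosine(E[index["cat"]], E[index["dog"]]))

print(f"cat~dog: mean={np.mean(sims):.3f} sd={np.std(sims):.3f}")
```

Reporting a mean and standard deviation over runs, rather than one similarity from one run, is the paper's core recommendation.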

**Quantifying the Effects of Text Duplication on Semantic Models.** Alexandra Schofield, Laure Thompson, and David Mimno. EMNLP 2017 PDF

**The Strange Geometry of Skip-Gram with Negative Sampling.** David Mimno and Laure Thompson. EMNLP 2017 PDF (best paper honorable mention)

Most people assume that word embedding vectors are determined by semantics. In fact, in the popular SGNS algorithm the negative sampling objective dominates, resulting in vectors that lie within a narrow cone.

**Applications of Topic Models.** Jordan Boyd-Graber, Yuening Hu, and David Mimno. Foundations and Trends in Information Retrieval, Now Publishers. 2017.

**Comparing Grounded Theory and Topic Modeling: Extreme Divergence or Unlikely Convergence?** Eric P. S. Baumer, David Mimno, Shion Guha, Emily Quan, Geri K. Gay. JASIST 68(6) 2017. PDF

We draw parallels between two methods for building high-level concepts that are grounded in documents. Both provide a useful perspective on the other.

**Pulling Out the Stops: Rethinking Stopword Removal for Topic Models.** Alexandra Schofield, Måns Magnusson, and David Mimno. EACL 2017 PDF

Removing the dozen or so most frequent words in a corpus has a big effect on topic models. Beyond that, however, it's hard to tell the difference between models with stopwords removed before training and models with stopwords removed afterward.

**The Tell-Tale Hat: Surfacing the Uncertainty in Folklore Classification.** Peter M. Broadwell, David Mimno and Timothy R. Tangherlini. Cultural Analytics, February 2017. HTML

**Cats and Captions vs. Creators and the Clock: Comparing Multimodal Content to Context in Predicting Relative Popularity.** Jack Hessel, Lillian Lee, and David Mimno. WWW 2017

If you want to predict the popularity of online content, you need to consider social effects and the time of posting. In some cases a matter of seconds can be significant.

**Beyond Exchangeability: The Chinese Voting Process.** Moontae Lee, Seok Hyun Jin, and David Mimno. NIPS 2016

Helpfulness votes in Amazon and StackExchange forums are sensitive to social pressure. Accounting for forum-specific biases helps identify over- and under-rated content, and quantifies forum culture.

**Comparing Apples to Apple: The Effects of Stemmers on Topic Models.** Alexandra Schofield and David Mimno. Transactions of the Association for Computational Linguistics 4 (2016): 287-300. PDF

Using a stemmer is unlikely to improve topic model results. If you're worried about displaying small variations of the same word over and over, use the stemmer *after* training to group words together for display.
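A sketch of the recommended post-training grouping. The deliberately crude suffix stripper below is my own stand-in for a real stemmer (e.g. Porter's), and the topic words and probabilities are hypothetical:

```python
from collections import defaultdict

def crude_stem(word):
    """A deliberately crude suffix stripper, standing in for a real stemmer;
    used only to group display variants, never during training."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def group_for_display(top_words):
    """top_words: list of (word, probability) from an already-trained topic."""
    groups = defaultdict(list)
    for word, p in top_words:
        groups[crude_stem(word)].append((word, p))
    merged = []
    for variants in groups.values():
        total = sum(p for _, p in variants)
        label = max(variants, key=lambda wp: wp[1])[0]  # most probable variant
        merged.append((label, total))
    return sorted(merged, key=lambda wp: -wp[1])

topic = [("model", 0.06), ("models", 0.05), ("modeling", 0.03),
         ("topic", 0.04), ("topics", 0.02)]
print(group_for_display(topic))
```

The model itself is untouched; only the displayed word list is collapsed.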

**Machine Learning and Grounded Theory Method: Convergence, Divergence, and Combination**. Michael Muller, Shion Guha, Eric P. S. Baumer, David Mimno, and N. Sadat Shami. GROUP 2016. PDF

**Missing Photos, Suffering Withdrawal, or Finding Freedom? How Experiences of Social Media Non-Use Influence the Likelihood of Reversion**. Eric P. S. Baumer, Shion Guha, Emily Quan, David Mimno, Geri K. Gay. Social Media and Society. November 2015. HTML

**Posterior predictive checks to quantify lack-of-fit in admixture models of latent population structure.** David Mimno, David M Blei, Barbara E Engelhardt. Proceedings of the National Academy of Sciences. 112(26):E3341–50, 2015. PNAS arXiv preprint

Admixture models represent the alleles in a genome as a combination of latent ancestral populations. We apply posterior predictive checks to evaluate the quality of fit with respect to functions of interest to population biologists.

**How Social Media Non-use Influences the Likelihood of Reversion: Self Control, Being Surveilled, Feeling Freedom, and Socially Connecting.** Eric P. S. Baumer, Shion Guha, Emily Quan, David Mimno, and Geri Gay. Social Media and Society, 2015.

**Robust Spectral Inference for Joint Stochastic Matrix Factorization.** Moontae Lee, David Bindel, and David Mimno. NIPS, 2015, Montreal, QC, Canada.

**Evaluation methods for unsupervised word embeddings.** Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. EMNLP, 2015, Lisbon, Portugal. PDF Data

Word embeddings appear to represent meaning with geometry. But evaluating this property has been difficult. Getting humans to rate the semantic similarity of words is error-prone and time consuming. We use a simpler method to collect human judgments based on pairwise comparisons and odd-one-out detection tasks.
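A toy illustration of the odd-one-out task: the embedding's guess is the word least similar to the centroid of the others, which can then be compared against human answers. The vectors below are invented for illustration; a real evaluation would load trained embeddings:

```python
import numpy as np

def odd_one_out(words, vectors):
    """Return the word whose vector is least similar (by cosine) to the
    mean of the others: the embedding's guess at the intruder."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    scores = []
    for i, w in enumerate(words):
        others = np.mean([vectors[x] for j, x in enumerate(words) if j != i],
                         axis=0)
        scores.append((cos(vectors[w], others), w))
    return min(scores)[1]

# Hypothetical toy vectors: three animals and one vehicle.
vecs = {
    "cat":  np.array([1.0, 0.9, 0.0]),
    "dog":  np.array([0.9, 1.0, 0.1]),
    "lion": np.array([1.0, 0.8, 0.2]),
    "car":  np.array([0.0, 0.1, 1.0]),
}
print(odd_one_out(["cat", "dog", "lion", "car"], vecs))  # prints "car"
```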

**Care and Feeding of Topic Models: Problems, Diagnostics, and Improvements** Jordan Boyd-Graber, David Mimno, and David Newman. In *Handbook of Mixed Membership Models and Their Applications*, CRC/Chapman Hall, 2014. PDF

A description of common pre-processing steps and model checking diagnostics.

**Low-dimensional Embeddings for Interpretable Anchor-based Topic Inference** Moontae Lee and David Mimno. EMNLP, 2014, Doha, Qatar. (Selected for oral presentation). PDF

In this work we trade an approximate solution to an exact problem for an exact solution to an approximation. We use a proven method in data visualization, the *t*-SNE projection, to compress a high-dimensional word co-occurrence space to a visualizable two- or three-dimensional space, and then find an exact convex hull. The corners of this convex hull become the anchor words for topics. We find better topics with more salient anchor words, while also improving the interpretability of the algorithm.
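The projection-then-hull pipeline can be sketched with a pure-Python 2-D convex hull (Andrew's monotone chain). The coordinates below are hypothetical stand-ins for t-SNE output; in the paper, the hull corners become the anchor words:

```python
def convex_hull(points):
    """Andrew's monotone chain: corners of the 2-D convex hull, CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Hypothetical 2-D coordinates, standing in for a t-SNE projection of the
# word co-occurrence space.
coords = {"apple": (0.0, 0.0), "banana": (4.0, 0.0), "cherry": (2.0, 3.0),
          "grape": (2.0, 1.0), "plum": (1.5, 0.5)}
hull = convex_hull(list(coords.values()))
anchors = [w for w, p in coords.items() if p in hull]
print(anchors)
```

Finding an exact hull is only feasible because the projected space is two- or three-dimensional; that is the "exact solution to an approximation" trade.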

**Significant Themes in 19th-Century Literature** Matthew L. Jockers, David Mimno. *Poetics*, Dec 2013. Preprint

Models of literature are usually used for exploratory data analysis, but they can also be used to evaluate specific conjectures. We use permutation tests, bootstrap tests, and posterior predictive checks to test some hypotheses about associations between gender, anonymity, and literary themes.

**Random Projections for Anchor-based Topic Inference** David Mimno. NIPS workshop on Randomized Methods, 2013. PDF

Random projections allow us to scale anchor-based topic finding algorithms to large vocabularies. Projections with structured sparsity, for example holding the number of non-zeros in each row of the random projection fixed, produce better results.
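A NumPy sketch of such a structured-sparse projection: every row gets exactly the same number of non-zeros, rather than i.i.d. sparsity. The dimensions and the matrix being projected are invented for illustration:

```python
import numpy as np

def sparse_projection(d_in, d_out, nnz_per_row, rng):
    """Random projection with structured sparsity: exactly `nnz_per_row`
    non-zero entries (random signs) in each row."""
    R = np.zeros((d_in, d_out))
    for i in range(d_in):
        cols = rng.choice(d_out, size=nnz_per_row, replace=False)
        R[i, cols] = rng.choice([-1.0, 1.0], size=nnz_per_row)
    return R / np.sqrt(nnz_per_row)

rng = np.random.default_rng(0)
Q = rng.random((500, 1000))          # stand-in for a word co-occurrence matrix
R = sparse_projection(1000, 50, nnz_per_row=8, rng=rng)
Q_low = Q @ R                        # project the 1000-dim side down to 50
print(Q_low.shape)
```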

**A Practical Algorithm for Topic Modeling with Provable Guarantees** Sanjeev Arora, Rong Ge, Yonatan Halpern, David Mimno, Ankur Moitra, David Sontag, Yichen Wu, Michael Zhu. ICML, 2013, Atlanta, GA. (Selected for long-form presentation). PDF

Spectral algorithms for LDA have been useful for proving learnability bounds, but it hasn't been clear whether they are practical. This paper presents an algorithm that maintains theoretical guarantees while also providing extremely fast inference. We compared this new algorithm directly to standard MCMC methods on a number of metrics for synthetic and real data.

**Scalable Inference of Overlapping Communities** Prem Gopalan, David Mimno, Sean Gerrish, Michael J. Freedman, David Blei. NIPS, 2012, Lake Tahoe, NV. (Selected for spotlight presentation)

**Sparse stochastic inference for latent Dirichlet allocation** David Mimno, Matthew Hoffman and David Blei. ICML, 2012, Edinburgh, Scotland. (Selected for long-form presentation). PDF

Gibbs sampling can be fast if data is sparse, but doesn't scale because it requires us to keep a state variable for every data point. Online stochastic inference can be fast and uses constant memory, but doesn't scale because it can't leverage sparsity. We present a method that uses Gibbs sampling in the local step of a stochastic variational algorithm. The resulting method can process a 33 billion word corpus of 1.2 million books with thousands of topics on a single CPU.

**Computational Historiography: Data Mining in a Century of Classics Journals** David Mimno. ACM J. of Computing in Cultural Heritage. 5, 1, Article 3 (April 2012), 19 pages. PDF

**Topic Models for Taxonomies** Anton Bakalov, Andrew McCallum, Hanna Wallach, and David Mimno. Joint Conference on Digital Libraries (JCDL) 2012, Washington, DC. PDF

**Database of NIH grants using machine-learned categories and graphical clustering** Edmund M Talley, David Newman, David Mimno, Bruce W Herr II, Hanna M Wallach, Gully A P C Burns, A G Miriam Leenders and Andrew McCallum, Nature Methods, Volume 8(7), June 2011, pp. 443--444. HTML PDF

What does the NIH fund, and how are scientific disciplines divided between institutes? In this paper we created a visualization of 100,000 accepted proposals and the 200,000 journal publications associated with those grants.

**Reconstructing Pompeian Households** David Mimno. UAI, 2011, Barcelona, Spain. (selected for oral presentation) PDF

Houses in Pompeii have several architecturally distinct types of rooms, but it's not always clear what the function of these rooms was, or even if there was a consistent pattern of use across different houses. This work uses statistical models to predict the artifacts found in different rooms.

**Bayesian Checking for Topic Models** David Mimno, David Blei. EMNLP, 2011, Edinburgh, Scotland. (selected for oral presentation) PDF

This paper measures the degree to which data fits the assumptions of topic models. Topic models represent documents as mixtures of simple, static multinomial distributions over words. We know that real documents exhibit "burstiness": when a word occurs in a document, it tends to occur many times. In this paper, we use a method from Bayesian model checking, *posterior predictive checks*, to measure the difference between the burstiness we observed and the expectation of the model. We then use this method to search for clusterings of documents based on time or other observed groupings, that best explain the observed burstiness.
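A simplified posterior predictive check in this spirit: compare an observed burstiness statistic to the same statistic under replicated data from a single static multinomial. The static multinomial is a deliberately crude stand-in for the fitted topic model, and the corpus sizes and discrepancy statistic below are illustrative choices of mine:

```python
import numpy as np

def burstiness(doc_counts):
    """Discrepancy statistic: average per-document count of a word given
    that it appears at all (bursty text repeats the words it uses)."""
    appears = doc_counts > 0
    return doc_counts.sum() / appears.sum()

rng = np.random.default_rng(1)
V, n_docs, doc_len = 50, 40, 100
# Observed corpus: bursty by construction (each doc favors a few words).
obs = np.stack([rng.multinomial(doc_len, rng.dirichlet(np.full(V, 0.05)))
                for _ in range(n_docs)])
observed = burstiness(obs)

# Replications under the exchangeability assumption of one static multinomial.
phi = obs.sum(axis=0) / obs.sum()
reps = [burstiness(np.stack([rng.multinomial(doc_len, phi)
                             for _ in range(n_docs)]))
        for _ in range(200)]
p_value = np.mean([r >= observed for r in reps])
print(f"observed={observed:.2f} replicated mean={np.mean(reps):.2f} "
      f"p={p_value:.3f}")
```

A tiny posterior predictive p-value signals that the exchangeable model cannot reproduce the burstiness of the real documents.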

**Optimizing Semantic Coherence in Topic Models** David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, Andrew McCallum. EMNLP, 2011, Edinburgh, Scotland. (selected for oral presentation) PDF

We introduce a metric for detecting semantic errors in topic models and develop a completely unsupervised model that specifically tries to improve this metric. Topic models provide a useful method for organizing large document collections into a small number of meaningful word clusters. In practice, however, many topics contain obvious semantic errors that may not reduce predictive power, but significantly weaken user confidence. We find that measuring the probability that lower-ranked words in a topic co-occur in documents with higher-ranked words beats all current methods for detecting a large class of low-quality topics.
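The metric can be sketched directly from document frequencies: for each ordered pair of top words, score how often the lower-ranked word co-occurs in documents with the higher-ranked one, via log((D(w_hi, w_lo) + 1) / D(w_hi)). The documents and word rankings below are toy data of mine:

```python
import math
from itertools import combinations

def coherence(top_words, docs):
    """Co-occurrence coherence: sum over pairs of top words of
    log((co-document frequency + 1) / document frequency of the
    higher-ranked word)."""
    doc_sets = [set(d) for d in docs]
    def df(*words):
        return sum(all(w in d for w in words) for d in doc_sets)
    score = 0.0
    for i, j in combinations(range(len(top_words)), 2):  # i < j: i ranked higher
        w_hi, w_lo = top_words[i], top_words[j]
        score += math.log((df(w_hi, w_lo) + 1) / df(w_hi))
    return score

docs = [["topic", "model", "word"], ["topic", "model"],
        ["word", "vector"], ["topic", "word", "vector"]]
good = coherence(["topic", "model", "word"], docs)
bad = coherence(["topic", "vector", "model"], docs)
print(f"coherent topic: {good:.2f}  incoherent topic: {bad:.2f}")
```

Topics whose top words never co-occur score sharply lower, which is exactly the kind of low-quality topic the metric is designed to flag.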

**Measuring Confidence in Temporal Topic Models with Posterior Predictive Checks** David Mimno, David Blei. NIPS Workshop on Computational Social Science and the Wisdom of Crowds, 2010, Whistler, BC.

**Rethinking LDA: Why Priors Matter** Hanna Wallach, David Mimno and Andrew McCallum. NIPS, 2009, Vancouver, BC. PDF Supplementary Material

Empirically, we have found that optimizing Dirichlet hyperparameters for document-topic distributions in topic models makes a huge difference: topics are not dominated by very common words and topics are more stable as the number of topics increases. In this paper we explore the effects of Dirichlet priors on topic models. The best structure seems to be an asymmetric prior over document-topic distributions and a symmetric prior over topic-word distributions, currently implemented in the MALLET toolkit.

**Reconstructing Pompeian Households** David Mimno. Applications of Topic Models Workshop, NIPS 2009, Whistler, BC. House data Artifact data PDF (selected for oral presentation)

Pompeii provides a unique view into daily life in a Roman city, but the evidence is noisy and incomplete. This work applies statistical data mining methods originating in text analysis to a database of artifacts found in 30 houses in Pompeii.

**Polylingual Topic Models** David Mimno, Hanna Wallach, Jason Naradowsky, David Smith and Andrew McCallum. EMNLP, 2009, Singapore. PDF

Standard statistical topic models do not handle multiple languages well, but many important corpora -- particularly outside scientific publications -- contain a mix of many languages. We show that with simple modifications, topic models can leverage not only direct translations but also comparable collections like Wikipedia articles. We demonstrate the system on European parliament proceedings in 12 languages and comparable Wikipedia articles in 14 languages. Code is available in the MALLET toolkit.

**Evaluation Methods for Topic Models** Hanna Wallach, Iain Murray, Ruslan Salakhutdinov and David Mimno. ICML, 2009, Montreal, Quebec. PDF

Held-out likelihood experiments provide an important complement to task-specific evaluations in topic models. We evaluate several methods for calculating held-out likelihoods. Several previously used methods, especially the harmonic mean method, show poor accuracy and high variance compared to a "Chib-style" method and a particle filter-inspired method.
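The harmonic mean estimator's pathology is easy to reproduce in a toy conjugate model where the exact marginal likelihood is known in closed form. This beta-binomial sketch is my own illustration, not an experiment from the paper; with finite samples the estimator typically overshoots the true value:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 35                     # observed: k successes in n trials
a, b = 1.0, 1.0                   # Beta(1,1) prior

# Exact log marginal likelihood of the beta-binomial model
# (binomial coefficient omitted consistently here and below).
exact = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
         + math.lgamma(a + k) + math.lgamma(b + n - k)
         - math.lgamma(a + b + n))

def harmonic_mean_estimate(n_samples):
    """Harmonic mean of the likelihood over exact posterior samples;
    known to be badly behaved (enormous or infinite variance)."""
    theta = rng.beta(a + k, b + n - k, size=n_samples)
    loglik = k * np.log(theta) + (n - k) * np.log1p(-theta)
    # log of harmonic mean: log(N) - logsumexp(-loglik)
    m = (-loglik).max()
    return np.log(n_samples) - (m + np.log(np.exp(-loglik - m).sum()))

estimates = [harmonic_mean_estimate(5000) for _ in range(20)]
print(f"exact={exact:.3f}  harmonic mean={np.mean(estimates):.3f} "
      f"sd={np.std(estimates):.3f}")
```

The overshoot happens because the reciprocal likelihood is dominated by low-likelihood regions the posterior almost never samples.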

**Efficient Methods for Topic Model Inference on Streaming Document Collections** Limin Yao, David Mimno and Andrew McCallum. KDD, 2009, Paris, France. PDF Slides on fast sampling

Statistical topic modeling has become popular in text processing, but remains computationally intensive. It is often impossible to run standard inference methods on collections because of limited space (e.g., large IR corpora) and time (e.g., streaming corpora). In this paper we evaluate a number of methods for lightweight online topic inference, based on models trained from computationally expensive offline processes. In addition, we present SparseLDA, a new data structure and algorithm for Gibbs sampling in multinomial mixture models (such as LDA) that offers substantial improvements in speed and memory usage. A parallelized version of this algorithm is implemented in MALLET.

Error: in section 3.4, the statement "The constant *s* only changes when we update the hyperparameters α" is incorrect, as the number of words in the old topic and the new topic change by one. In fact, *s* must be updated before and after sampling a topic for each token, but this update takes a constant number of operations, regardless of the number of topics. This problem was only in the paper — the MALLET implementation has always been correct.

**Polylingual Topic Models** David Mimno, Hanna Wallach, Limin Yao, Jason Naradowsky and Andrew McCallum. Snowbird Learning Workshop, 2009, Clearwater, FL.

**Classics in the Million Book Library** Gregory Crane, Alison Babeu, David Bamman, Thomas Breuel, Lisa Cerrato, Daniel Deckers, Anke Lüdeling, David Mimno, Rashmi Singhal, David A. Smith, Amir Zeldes. Digital Humanities Quarterly 3(1), Winter 2009. HTML

In October 2008, Google announced a settlement that would provide access to seven million scanned books; at the same time, the number of books freely available under an open license from the Internet Archive exceeded one million. The collections and services that classicists have created over the past generation place them in a strategic position to exploit the potential of these collections. This paper concludes with research topics relevant to all humanists: converting page images to text, one language to another, and raw text into machine-actionable data.

**Gibbs Sampling for Logistic Normal Topic Models with Graph-Based Priors** David Mimno, Hanna Wallach and Andrew McCallum. NIPS Workshop on Analyzing Graphs, 2008, Whistler, BC. (one of five out of 22 papers selected for oral presentation) PDF

Dirichlet distributions are a mathematically tractable prior distribution for mixing proportions in Bayesian mixture models, but their convenience comes at the cost of flexibility and expressiveness. Previous work has suggested alternative priors such as logistic normal distributions, extending topic mixture models with covariance matrices and dynamic linear models, but this work has been limited to variational approximations. This paper presents a method for simple, robust Gibbs sampling in logistic normal topic models using an auxiliary variable scheme. Using this method, we extend previous models over linear chains to Gaussian Markov random field priors with arbitrarily structured graphs.

**Topic Models Conditioned on Arbitrary Features with Dirichlet-multinomial Regression** David Mimno and Andrew McCallum. UAI, 2008 (selected for plenary presentation) PDF Data

Text documents are usually accompanied by metadata, such as the authors, the publication venue, the date, and any references. Work in topic modeling that has taken such information into account, such as Author-Topic, Citation-Topic, and Topic-over-Time models, has generally focused on constructing specific models that are suited only for one particular type of metadata. This paper presents a simple, unified model for learning topics from documents given arbitrary non-textual features, which can be discrete, categorical, or continuous.

**Modeling Career Path Trajectories** David Mimno and Andrew McCallum. University of Massachusetts, Amherst Technical Report #2007-69, 2007. PDF

Descriptions of previous work experience in resumes are a valuable source of information about the structure of the job market and the economy. There is, however, a high degree of variability in these documents. Job titles are a particular problem, as they are often either overly sparse or overly general: 85% of job titles in our corpus occur only once, while the most common titles, such as "Consultant", are so broad as to be virtually meaningless. We use a hierarchical hidden state model to discover clusters of words that correspond to distinct skills, clusters of skills that correspond to jobs, and transition patterns between jobs.

**Community-based Link Prediction with Text** David Mimno, Hanna Wallach, and Andrew McCallum. Statistical Network Modeling Workshop, NIPS, 2007, Whistler, BC.

**Expertise Modeling for Matching Papers with Reviewers** David Mimno and Andrew McCallum. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD) 2007, San Jose, CA. PDF Data

Science depends on peer review, but matching papers with reviewers is a challenging and time consuming task. We compare several automatic methods for measuring the similarity between a submitted abstract and papers previously written by reviewers. These include a novel topic model that automatically divides an author's papers into topically coherent "personas".
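The paper's persona-based topic model is more involved than a short sketch allows. As a minimal stand-in, here is a generic TF-IDF cosine matcher of the kind such similarity systems build on; the reviewers and texts below are invented:

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Bag-of-words TF-IDF vectors for a small corpus of token lists."""
    n = len(texts)
    df = Counter(w for t in texts for w in set(t))
    return [{w: c * math.log(n / df[w]) for w, c in Counter(t).items()}
            for t in texts]

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical reviewers, each represented by tokens from their past papers.
reviewers = {
    "r1": "topic model dirichlet inference sampling".split(),
    "r2": "retrieval ranking query index search".split(),
    "r3": "parsing grammar syntax treebank".split(),
}
submission = "a topic model with collapsed sampling".split()

texts = list(reviewers.values()) + [submission]
vecs = tfidf_vectors(texts)
sub_vec, rev_vecs = vecs[-1], dict(zip(reviewers, vecs))
best = max(reviewers, key=lambda r: cosine(sub_vec, rev_vecs[r]))
print(best)
```

The persona model improves on this by splitting each reviewer's papers into topically coherent groups before matching.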

**Probabilistic Representations for Integrating Unreliable Data Sources** David Mimno, Andrew McCallum and Gerome Miklau. IIWeb workshop at AAAI 2007, Vancouver, BC, Canada. PDF

**Mixtures of Hierarchical Topics with Pachinko Allocation.** David Mimno, Wei Li and Andrew McCallum. International Conference on Machine Learning (ICML) 2007, Corvallis, OR. PDF

The four-level pachinko allocation model (PAM) (Li & McCallum, 2006) represents correlations among topics using a DAG structure. It does not, however, represent a nested hierarchy of topics, with some topical word distributions representing the vocabulary that is shared among several more specific topics. This paper presents hierarchical PAM, an enhancement that explicitly represents a topic hierarchy. This model can be seen as combining the advantages of hLDA's topical hierarchy representation with PAM's ability to mix multiple leaves of the topic hierarchy. Experimental results show improvements in likelihood of held-out documents, as well as mutual information between automatically-discovered topics and human-generated categories such as journals.

**Mining a digital library for influential authors.** David Mimno and Andrew McCallum. Joint Conference on Digital Libraries (JCDL) 2007, Vancouver, BC, Canada. PDF

Most digital libraries let you search for documents, but we often want to search for people as well. We extract and disambiguate author names from online research papers, weight papers using PageRank on the citation graph, and expand queries using a topic model. We evaluate the system by comparing people returned for the query "information retrieval" to recipients of major awards in IR.
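The PageRank step can be sketched with plain power iteration over a row-stochastic citation matrix; the four-paper citation graph below is hypothetical:

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power iteration on a citation graph: adj[i][j] = 1 if paper i cites j.
    Papers with no outgoing citations are treated as linking uniformly."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    P = np.divide(A, deg, out=np.full_like(A, 1.0 / n), where=deg > 0)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (r @ P)
    return r

# Hypothetical citation graph: every later paper cites paper 0.
cites = [[0, 0, 0, 0],
         [1, 0, 0, 0],
         [1, 1, 0, 0],
         [1, 0, 1, 0]]
ranks = pagerank(cites)
print(ranks.round(3))
```

The heavily cited paper 0 ends up with the largest weight, which is then used to weight documents in the people-search ranking.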

**Organizing the OCA: Learning faceted subjects from a library of digital books.** David Mimno and Andrew McCallum. Joint Conference on Digital Libraries (JCDL) 2007, Vancouver, BC, Canada. PDF

The Open Content Alliance is one of several large-scale digitization projects currently producing huge numbers of digital books. Statistical topic models are a natural choice for organizing and describing such large text corpora, but scalability becomes a problem when we are dealing with multi-billion word corpora. This paper presents a new method for topic modeling, DCM-LDA. In this model, we train an independent topic model for every book, using pages as "documents". We then gather the topics discovered, cluster them, and then fit a Dirichlet prior for each topic cluster. Finally, we retrain the individual book topic models using these new shared topics.

**Beyond Digital Incunabula: Modeling the Next Generation of Digital Libraries.** Gregory Crane, David Bamman, Lisa Cerrato, Alison Jones, David Mimno, Adrian Packel, D. Sculley, and Gabriel Weaver. European Conference on Digital Libraries (ECDL) 2006, Alicante, Spain. PDF

Several groups are currently embarking on large scale digitization projects, but are they producing anything more than lots of raw text? This paper argues that such an investment in digitization will be more valuable if accompanied by a parallel investment in highly structured resources such as dictionaries. Several examples, including some I worked on while at Perseus, illustrate this effect.

**Bibliometric Impact Measures Leveraging Topic Analysis.** Gideon Mann, David Mimno and Andrew McCallum. Joint Conference on Digital Libraries (JCDL) 2006, Chapel Hill, NC. PDF PowerPoint

When evaluating the impact of research papers, it's important to compare similar papers: a massively influential paper in Mathematics may be as well cited as a middling paper in Molecular Biology. We present a system that combines automatic citation analysis on spidered research papers with a new automatic topic model that is aware of multi-word terms. This system is capable of finding fine-grained sub-fields while scaling to the exponential increase in open-access publishing. We evaluate papers from the Rexa digital library using both traditional bibliometric statistics (substituting topics for journals) as well as several new metrics.

**Hierarchical Catalog Records: Implementing a FRBR Catalog.** David Mimno, Alison Jones and Gregory Crane. D-Lib Magazine, October 2005. HTML

**Finding a Catalog: Generating Analytical Catalog Records from Well-structured Digital Texts.** David Mimno, Alison Jones and Gregory Crane. Joint Conference on Digital Libraries (JCDL) 2005, Denver, CO. PDF.

**Services for a Customizable Authority Linking Environment.** Mark Patton and David Mimno. demonstration at Joint Conference on Digital Libraries (JCDL) 2004, Tucson, AZ.

**Towards a Cultural Heritage Digital Library.** Gregory Crane, Clifford E. Wulfman, Lisa M. Cerrato, Anne Mahoney, Thomas L. Milbank, David Mimno, Jeffrey A. Rydberg-Cox, David A. Smith, and Christopher York. Joint Conference on Digital Libraries (JCDL) 2003, Houston, TX.


Edoardo M. Airoldi, David M. Blei, Stephen E. Fienberg, Eric P. Xing. Mixed Membership Stochastic Blockmodels. JMLR (9) 2008 pp. 1981-2014.

Networks

[BibTeX]

@article{airoldi2008mixed, author={Edoardo M. Airoldi and David M. Blei and Stephen E. Fienberg and Eric P. Xing}, title={Mixed Membership Stochastic Blockmodels}, journal={JMLR}, year={2008}, volume={9}, pages={1981-2014}, }

Loulwah AlSumait, Daniel Barbará, James Gentle, Carlotta Domeniconi. Topic Significance Ranking of LDA Generative Models. ECML (2009).

Evaluation

[BibTeX]

@inproceedings{alsumait2009topic, author={Loulwah AlSumait and Daniel Barbará and James Gentle and Carlotta Domeniconi}, title={Topic Significance Ranking of LDA Generative Models}, booktitle={ECML}, year={2009}, url={http://www.springerlink.com/content/v3jth868647716kg/}, }

David Andrzejewski, Anne Mulhern, Ben Liblit, Xiaojin Zhu. Statistical Debugging using Latent Topic Models. ECML (2007).

[BibTeX]

@inproceedings{andrzejewski2007statistical, author={David Andrzejewski and Anne Mulhern and Ben Liblit and Xiaojin Zhu}, title={Statistical Debugging using Latent Topic Models}, booktitle={ECML}, year={2007}, }

David Andrzejewski, Xiaojin Zhu, Mark Craven. Incorporating domain knowledge into topic modeling via Dirichlet Forest priors. ICML (2009).

[BibTeX]

@inproceedings{andrzejewski2009incorporating, author={David Andrzejewski and Xiaojin Zhu and Mark Craven}, title={Incorporating domain knowledge into topic modeling via Dirichlet Forest priors}, booktitle={ICML}, year={2009}, pages={25-32}, }

David Andrzejewski, Xiaojin Zhu, Mark Craven, Ben Recht. A Framework for Incorporating General Domain Knowledge into Latent Dirichlet Allocation using First-Order Logic. IJCAI (2011).

[BibTeX]

@inproceedings{andrzejewski2011framework, author={David Andrzejewski and Xiaojin Zhu and Mark Craven and Ben Recht}, title={A Framework for Incorporating General Domain Knowledge into Latent Dirichlet Allocation using First-Order Logic}, booktitle={IJCAI}, year={2011}, }

Arthur Asuncion, Padhraic Smyth, Max Welling. Asynchronous Distributed Learning of Topic Models. NIPS (2008).

Scalability

[BibTeX]

@inproceedings{asuncion2008distributed, author={Arthur Asuncion and Padhraic Smyth and Max Welling}, title={Asynchronous Distributed Learning of Topic Models}, booktitle={NIPS}, year={2008}, pages={81-88}, url={http://www.ics.uci.edu/~asuncion/pubs/NIPS_08.pdf}, }

Arthur Asuncion, Max Welling, Padhraic Smyth, Yee-Whye Teh. On Smoothing and Inference for Topic Models. UAI (2009).

Inference

[BibTeX]

@inproceedings{asuncion2009smoothing, author={Arthur Asuncion and Max Welling and Padhraic Smyth and Yee-Whye Teh}, title={On Smoothing and Inference for Topic Models}, booktitle={UAI}, year={2009}, url={http://www.ics.uci.edu/~asuncion/pubs/UAI_09.pdf}, }

A dense but excellent review of inference in topic models. Introduces CVB0, a method for collapsed variational inference surprisingly similar to Gibbs sampling.

David Blei, Michael Jordan. Modeling Annotated Data. SIGIR (2003).

[BibTeX]

@inproceedings{blei2003modeling, author={David Blei and Michael Jordan}, title={Modeling Annotated Data}, booktitle={SIGIR}, year={2003}, }

This paper introduces CorrLDA for data that consists of text and images, where image "topics" are chosen only from topics that are assigned to the text in the same document.

David M. Blei. lda-c. (2003).

Implementations

[BibTeX]

@misc{blei-lda-c, author={David M. Blei}, title={lda-c}, year={2003}, url={http://www.cs.princeton.edu/~blei/lda-c/}, }

lda-c implements LDA with variational inference in C.

David M. Blei, Andrew Ng, Michael Jordan. Latent Dirichlet allocation. JMLR (3) 2003 pp. 993-1022.

[BibTeX]

@article{blei2003latent, author={David M. Blei and Andrew Ng and Michael Jordan}, title={Latent Dirichlet allocation}, journal={JMLR}, year={2003}, volume={3}, pages={993-1022}, }

David M. Blei, Thomas Griffiths, Michael Jordan, Joshua Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. NIPS (2003).

Non-parametric

[BibTeX]

@inproceedings{blei2003hierarchical, author={David M. Blei and Thomas Griffiths and Michael Jordan and Joshua Tenenbaum}, title={Hierarchical topic models and the nested Chinese restaurant process}, booktitle={NIPS}, year={2003}, url={http://books.nips.cc/papers/files/nips16/NIPS2003_AA03.pdf}, }

Introduces hLDA, which models topics in a tree. Each document is generated by topics along a single path through the tree.

David M. Blei, Thomas L. Griffiths, Michael I. Jordan. The nested Chinese restaurant process and hierarchical topic models. (2007).

Non-parametric

[BibTeX][Abstract]

@misc{blei2007nested, author={David M. Blei and Thomas L. Griffiths and Michael I. Jordan}, title={The nested Chinese restaurant process and hierarchical topic models}, year={2007}, url={http://arxiv.org/abs/0710.0845}, }

We present the nested Chinese restaurant process (nCRP), a stochastic process which assigns probability distributions to infinitely-deep, infinitely-branching trees. We show how this stochastic process can be used as a prior distribution in a nonparametric Bayesian model of document collections. Specifically, we present an application to information retrieval in which documents are modeled as paths down a random tree, and the preferential attachment dynamics of the nCRP leads to clustering of documents according to sharing of topics at multiple levels of abstraction. Given a corpus of documents, a posterior inference algorithm finds an approximation to a posterior distribution over trees, topics and allocations of words to levels of the tree. We demonstrate this algorithm on several collections of scientific abstracts. This model exemplifies a recent trend in statistical machine learning: the use of nonparametric Bayesian methods to infer distributions on flexible data structures.

This is a longer version of Blei et al. 2004, which extends that paper's hLDA model to trees of unlimited depth.

David M. Blei, John D. Lafferty. Dynamic Topic Models. ICML (2006).

Temporal

[BibTeX]

@inproceedings{blei2006dynamic, author={David M. Blei and John D. Lafferty}, title={Dynamic Topic Models}, booktitle={ICML}, year={2006}, url={http://portal.acm.org/citation.cfm?id=1143859}, }

David M. Blei, John D. Lafferty. A Correlated Topic model of Science. AAS (1) 2007 pp. 17-35.

[BibTeX]

@article{blei2007correlated, author={David M. Blei and John D. Lafferty}, title={A Correlated Topic model of Science}, journal={AAS}, year={2007}, volume={1}, number={1}, pages={17-35}, }

David M. Blei, Jon D. McAuliffe. Supervised Topic Models. NIPS (2007).

[BibTeX]

@inproceedings{blei2007supervised, author={David M. Blei and Jon D. McAuliffe}, title={Supervised Topic Models}, booktitle={NIPS}, year={2007}, url={http://books.nips.cc/papers/files/nips20/NIPS2007_0893.pdf}, }

David M. Blei. Introduction to Probabilistic Topic Models. Communications of the ACM, 2011.

Where to start

[BibTeX]

@article{blei2011introduction, author={David M. Blei}, title={Introduction to Probabilistic Topic Models}, journal={Communications of the ACM}, year={2011}, url={http://www.cs.princeton.edu/~blei/papers/Blei2011.pdf}, }

A high-level overview of probabilistic topic models.

Brad Block. Collapsed variational HDP. (2011).

Implementations

[BibTeX]

@misc{block2011cvhdp, author={Brad Block}, title={Collapsed variational HDP}, year={2011}, url={http://www.bradblock.com/tm-0.1.tar.gz}, }

This library contains Java source and class files implementing the Latent Dirichlet Allocation (single-threaded collapsed Gibbs sampling) and Hierarchical Dirichlet Process (multi-threaded collapsed variational inference) topic models. The models can be accessed through the command-line or through a simple Java API. Also included is a subset of the 20 Newsgroup dataset and results of experiments done on the dataset to confirm the correct operation and investigate some properties of the topic models. No third-party scientific libraries are required and all needed special functions are implemented and included.

Jordan Boyd-Graber, David M. Blei, Xiaojin Zhu. A Topic Model for Word Sense Disambiguation. EMNLP (2007).

NLP

[BibTeX]

@inproceedings{boydgraber2007topic, author={Jordan Boyd-Graber and David M. Blei and Xiaojin Zhu}, title={A Topic Model for Word Sense Disambiguation}, booktitle={EMNLP}, year={2007}, }

Jordan Boyd-Graber, David M. Blei. PUTOP: Turning Predominant Senses into a Topic Model for WSD. SEMEVAL (2007).

NLP

[BibTeX]

@inproceedings{boydgraber2007turning, author={Jordan Boyd-Graber and David M. Blei}, title={PUTOP: Turning Predominant Senses into a Topic Model for WSD}, booktitle={SEMEVAL}, year={2007}, }

Jordan Boyd-Graber, David M. Blei. Syntactic Topic Models. NIPS (2008).

NLP

[BibTeX]

@inproceedings{boydgraber2008syntactic, author={Jordan Boyd-Graber and David M. Blei}, title={Syntactic Topic Models}, booktitle={NIPS}, year={2008}, url={http://books.nips.cc/papers/files/nips21/NIPS2008_0319.pdf}, }

Jordan Boyd-Graber, David M. Blei. Multilingual Topic Models for Unaligned Text. UAI (2009).

Cross-language

[BibTeX]

@inproceedings{boydgraber2009multilingual, author={Jordan Boyd-Graber and David M. Blei}, title={Multilingual Topic Models for Unaligned Text}, booktitle={UAI}, year={2009}, }

David A. Broniatowski, Christopher L. Magee. Analysis of Social Dynamics on FDA Panels Using Social Networks Extracted From Meeting Transcripts. SocCom (2010).

Networks

[BibTeX]

@inproceedings{broniatowskimagee2010, author={David A. Broniatowski and Christopher L. Magee}, title={Analysis of Social Dynamics on FDA Panels Using Social Networks Extracted From Meeting Transcripts}, booktitle={SocCom}, year={2010}, url={http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5591237&tag=1}, }

A method for analyzing group decision making based on the Author-Topic Model.

David A. Broniatowski, Christopher L. Magee. Towards A Computational Analysis of Status and Leadership Styles on FDA Panels. SBP (2011).

Networks, Temporal

[BibTeX]

@inproceedings{broniatowskimagee2011, author={David A. Broniatowski and Christopher L. Magee}, title={Towards A Computational Analysis of Status and Leadership Styles on FDA Panels}, booktitle={SBP}, year={2011}, url={http://www.springerlink.com/content/w655v786lp583660/}, }

Incorporates temporal information to generate directed graphs based upon topic models.

Wray L. Buntine. Discrete Component Analysis. (2009).

Implementations

[BibTeX]

@misc{buntine-dca, author={Wray L. Buntine}, title={Discrete Component Analysis}, year={2009}, url={http://www.nicta.com.au/people/buntinew/discrete_component_analysis}, }

C implementation of LDA and multinomial PCA.

Wray L. Buntine, Aleks Jakulin. Discrete Component Analysis. SLSFS (2005).

[BibTeX]

@inproceedings{buntine2005discrete, author={Wray L. Buntine and Aleks Jakulin}, title={Discrete Component Analysis}, booktitle={SLSFS}, year={2005}, pages={1-33}, }

Wray L. Buntine. Estimating Likelihoods for Topic Models. Asian Conference on Machine Learning (2009).

Evaluation

[BibTeX]

@inproceedings{buntine2009estimating, author={Wray L. Buntine}, title={Estimating Likelihoods for Topic Models}, booktitle={Asian Conference on Machine Learning}, year={2009}, url={http://www.nicta.com.au/__data/assets/pdf_file/0019/20746/sdca-0202.pdf}, }

Provides improved versions of some of the methods in Wallach et al. (2009) for calculating held-out probability.
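The estimators in this line of work integrate over latent topic assignments; as a much simpler point of reference, the quantity they target is the log probability of held-out tokens under a fitted model. A minimal Python sketch, assuming fixed point estimates theta (doc-topic) and phi (topic-word) rather than the papers' full marginalization:

```python
from math import log

def heldout_loglik(tokens, theta, phi):
    """Log-likelihood of held-out word ids under point estimates.
    theta[k]: topic proportion; phi[k][w]: probability of word w in topic k.
    Each token's probability marginalizes over topics: sum_k theta_k * phi_k[w]."""
    return sum(log(sum(theta[k] * phi[k][w] for k in range(len(theta))))
               for w in tokens)
```

Plugging in point estimates is known to be optimistic relative to the properly marginalized likelihood, which is exactly why the left-to-right and related estimators of Wallach et al. and Buntine matter.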

Wray L. Buntine, Swapnil Mishra. Experiments with Non-parametric Topic Models. (2014).

Non-parametric

[BibTeX]

@misc{buntine2014experiments, author={Wray L. Buntine and Swapnil Mishra}, title={Experiments with Non-parametric Topic Models}, year={2014}, url={http://dl.acm.org/citation.cfm?id=2623691}, }

Non-parametric implementations of bursty models. The authors find that using fixed numbers of topics but optimizing hyperparameters provides a good approximation of a non-parametric HDP model.

Jun Fu Cai, Wee Sun Lee, Yee Whye Teh. NUS-ML: Improving Word Sense Disambiguation Using Topic Features. SEMEVAL (2007).

NLP

[BibTeX]

@inproceedings{cai2007nus, author={Jun Fu Cai and Wee Sun Lee and Yee Whye Teh}, title={NUS-ML: Improving Word Sense Disambiguation Using Topic Features}, booktitle={SEMEVAL}, year={2007}, }

Jonathan Chang. R package 'lda'. (2011).

Implementations

[BibTeX]

@misc{r-lda, author={Jonathan Chang}, title={R package 'lda'}, year={2011}, url={http://cran.r-project.org/web/packages/lda/}, }

This package implements latent Dirichlet allocation (LDA) and related models. This includes (but is not limited to) sLDA, corrLDA, and the mixed-membership stochastic blockmodel. Inference for all of these models is implemented via a fast collapsed Gibbs sampler written in C. Utility functions for reading/writing data typically used in topic models, as well as tools for examining posterior distributions, are also included.

Jonathan Chang, David Blei. Relational Topic Models for Document Networks. AIStats (2009).

Networks

[BibTeX]

@inproceedings{chang2009relational, author={Jonathan Chang and David Blei}, title={Relational Topic Models for Document Networks}, booktitle={AIStats}, year={2009}, }

Chaitanya Chemudugunta, Padhraic Smyth, Mark Steyvers. Modeling General and Specific Aspects of Documents with a Probabilistic Topic Model. NIPS (2006).

[BibTeX]

@inproceedings{chemudugunta2006modeling, author={Chaitanya Chemudugunta and Padhraic Smyth and Mark Steyvers}, title={Modeling General and Specific Aspects of Documents with a Probabilistic Topic Model}, booktitle={NIPS}, year={2006}, url={http://www.datalab.uci.edu/papers/special_words_NIPS06.pdf}, }

This paper has two interesting extensions to LDA that account for the power-law distribution of word frequencies in real documents. First, a general "background" distribution represents common words. Second, a "special words" model allows each document to have some unique words.

Changyou Chen, Lan Du, Wray Buntine. Sampling Table Configurations for the Hierarchical Poisson-Dirichlet Process. ECML-PKDD (2011).

Non-parametric

[BibTeX]

@inproceedings{chen2011sampling, author={Changyou Chen and Lan Du and Wray Buntine}, title={Sampling Table Configurations for the Hierarchical Poisson-Dirichlet Process}, booktitle={ECML-PKDD}, year={2011}, url={http://www.nicta.com.au/pub?doc=4806}, }

A simple hierarchical Pitman-Yor LDA sampler that does not record "table" assignments. Perplexity is sometimes far superior to other methods.

Jonathan Chang, Jordan Boyd-Graber, Chong Wang, Sean Gerrish, David M. Blei. Reading Tea Leaves: How Humans Interpret Topic Models. NIPS (2009).

Evaluation

[BibTeX]

@inproceedings{chang2009reading, author={Jonathan Chang and Jordan Boyd-Graber and Chong Wang and Sean Gerrish and David M. Blei}, title={Reading Tea Leaves: How Humans Interpret Topic Models}, booktitle={NIPS}, year={2009}, url={http://books.nips.cc/papers/files/nips22/NIPS2009_0125.pdf}, }

Pradipto Das, Rohini Srihari, Yun Fu. Simultaneous Joint and Conditional Modeling of Documents Tagged from Two Perspectives. (2011).

[BibTeX]

@inproceedings{das2011simultaneous, author={Pradipto Das and Rohini Srihari and Yun Fu}, title={Simultaneous Joint and Conditional Modeling of Documents Tagged from Two Perspectives}, booktitle={}, year={2011}, url={http://www.acsu.buffalo.edu/~pdas3/research/papers/CIKM/pdasCIKM11.pdf}, }

Rajarshi Das, Manzil Zaheer, Chris Dyer. Gaussian LDA for Topic Models with Word Embeddings. (2015).

[BibTeX]

@inproceedings{das2015gaussian, author={Rajarshi Das and Manzil Zaheer and Chris Dyer}, title={Gaussian LDA for Topic Models with Word Embeddings}, booktitle={}, year={2015}, url={http://rajarshd.github.io/papers/acl2015.pdf}, }

Hal Daumé III. Markov Random Topic Fields. (2009).

[BibTeX]

@inproceedings{daume2009markov, author={Hal Daumé III}, title={Markov Random Topic Fields}, booktitle={}, year={2009}, }

Andrew M. Dai, Amos J. Storkey. The Grouped Author-Topic Model for Unsupervised Entity Resolution. ICANN (2011).

[BibTeX]

@inproceedings{dai2011grouped, author={Andrew M. Dai and Amos J. Storkey}, title={The Grouped Author-Topic Model for Unsupervised Entity Resolution}, booktitle={ICANN}, year={2011}, }

Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, Richard Harshman. Indexing by Latent Semantic Analysis. JASIS (41) 1990 pp. 391-407.

[BibTeX]

@article{deerwester1990indexing, author={Scott Deerwester and Susan T. Dumais and George W. Furnas and Thomas K. Landauer and Richard Harshman}, title={Indexing by Latent Semantic Analysis}, journal={JASIS}, year={1990}, volume={41}, number={6}, pages={391-407}, }

Laura Dietz, Steffen Bickel, Tobias Scheffer. Unsupervised prediction of citation influences. ICML (2007).

Networks, Bibliometrics

[BibTeX]

@inproceedings{dietz2007unsupervised, author={Laura Dietz and Steffen Bickel and Tobias Scheffer}, title={Unsupervised prediction of citation influences}, booktitle={ICML}, year={2007}, }

Chris Ding, Tao Li, Wei Peng. On the Equivalence between Non-negative Matrix Factorization and Probabilistic Latent Semantic Indexing. Computational Statistics and Data Analysis (52) 2008 pp. 3913-3927.

Theory

[BibTeX]

@article{ding2008equivalence, author={Chris Ding and Tao Li and Wei Peng}, title={On the Equivalence between Non-negative Matrix Factorization and Probabilistic Latent Semantic Indexing}, journal={Computational Statistics and Data Analysis}, year={2008}, volume={52}, pages={3913-3927}, }

Gabriel Doyle, Charles Elkan. Accounting for Burstiness in Topic Models. ICML (2009).

[BibTeX]

@inproceedings{doyle2009accounting, author={Gabriel Doyle and Charles Elkan}, title={Accounting for Burstiness in Topic Models}, booktitle={ICML}, year={2009}, url={http://www.cs.utah.edu/~hal/tmp/icml/papers/162.pdf}, }

Replaces the standard per-topic multinomial distribution over words with a Dirichlet compound multinomial (DCM) to capture word burstiness.
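The burstiness effect is easy to see numerically. A minimal Python sketch of the DCM (Polya) log-likelihood, assuming a symmetric Dirichlet parameter and ignoring the multinomial coefficient (this is an illustration, not the paper's code):

```python
from math import lgamma

def dcm_loglik(counts, alpha):
    """Log-probability of one word sequence under the Dirichlet compound
    multinomial with symmetric parameter alpha (multinomial coefficient omitted).
    counts: per-word occurrence counts for a single document."""
    A = alpha * len(counts)            # sum of Dirichlet parameters
    N = sum(counts)                    # document length
    ll = lgamma(A) - lgamma(A + N)     # normalizer ratio
    for n in counts:
        ll += lgamma(alpha + n) - lgamma(alpha)   # per-word Polya term
    return ll
```

With a small alpha, the bursty count vector [4, 0, 0, 0] scores much higher than the evenly spread [1, 1, 1, 1], whereas a single multinomial with uniform probabilities assigns both sequences equal probability; this is the burstiness the DCM captures.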

Jacob Eisenstein, Brendan O'Connor, Noah A. Smith, Eric P. Xing. A Latent Variable Model for Geographic Lexical Variation. EMNLP (2010).

[BibTeX]

@inproceedings{eisenstein2010latent, author={Jacob Eisenstein and Brendan O'Connor and Noah A. Smith and Eric P. Xing}, title={A Latent Variable Model for Geographic Lexical Variation}, booktitle={EMNLP}, year={2010}, url={http://www.cc.gatech.edu/~jeisenst/papers/emnlp2010.pdf}, }

The widely-reported Twitter dialects paper. Topics combine a word distribution with a bivariate normal over latitude and longitude.

Jacob Eisenstein, Amr Ahmed, Eric P. Xing. Sparse Additive Generative Models of Text. ICML (2011).

[BibTeX]

@inproceedings{eisenstein2011sparse, author={Jacob Eisenstein and Amr Ahmed and Eric P. Xing}, title={Sparse Additive Generative Models of Text}, booktitle={ICML}, year={2011}, url={http://www.cc.gatech.edu/~jeisenst/papers/icml2011.pdf}, }

Presents a new generative model of text, based on the principle of sparse deviation from a background word distribution. This approach proves effective in supervised, unsupervised, and latent variable settings.

Elena Erosheva, Stephen Fienberg, John Lafferty. Mixed Membership Models of Scientific Publications. PNAS (101) 2004 pp. 5220-5227.

Bibliometrics

[BibTeX]

@article{erosheva2004mixed, author={Elena Erosheva and Stephen Fienberg and John Lafferty}, title={Mixed Membership Models of Scientific Publications}, journal={PNAS}, year={2004}, volume={101}, number={Suppl. 1}, pages={5220-5227}, }

James R. Foulds, L. Boyles, C. DuBois, Padhraic Smyth, Max Welling. Stochastic Collapsed Variational Bayesian Inference for Latent Dirichlet Allocation. KDD (2013).

Inference

[BibTeX]

@inproceedings{foulds2013stochastic, author={James R. Foulds and L. Boyles and C. DuBois and Padhraic Smyth and Max Welling}, title={Stochastic Collapsed Variational Bayesian Inference for Latent Dirichlet Allocation}, booktitle={KDD}, year={2013}, }

Radim Řehůřek. gensim. (2009).

Implementations

[BibTeX]

@misc{gensim, author={Radim Řehůřek}, title={gensim}, year={2009}, url={http://nlp.fi.muni.cz/projekty/gensim/}, }

Python package for topic modelling; includes distributed and online implementations of variational LDA.
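At the heart of online variational LDA (Hoffman et al., below) is a stochastic update that blends the current variational topic parameters with an estimate from the latest mini-batch, with a decaying step size. A minimal Python sketch, with illustrative names and the usual tau0/kappa schedule (not gensim's actual internals):

```python
def online_update(lam, lam_hat, t, tau0=1.0, kappa=0.7):
    """One online variational Bayes step for the topic-word parameters.
    lam: current variational parameters (K x V); lam_hat: mini-batch estimate;
    t: update count. Step size rho decays so that the updates converge
    (requires 0.5 < kappa <= 1)."""
    rho = (tau0 + t) ** (-kappa)
    return [[(1 - rho) * x + rho * y for x, y in zip(row_l, row_h)]
            for row_l, row_h in zip(lam, lam_hat)]
```

Early updates (small t, large rho) move the parameters aggressively toward each mini-batch; later updates barely move them, which is what makes a single pass over a large corpus feasible.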

Sean Gerrish, David M. Blei. A language-based approach to measuring scholarly impact. ICML (2010).

Bibliometrics

[BibTeX]

@inproceedings{gerrish2010language, author={Sean Gerrish and David M. Blei}, title={A language-based approach to measuring scholarly impact}, booktitle={ICML}, year={2010}, url={http://www.cs.princeton.edu/~blei/papers/GerrishBlei2010.pdf}, }

Mark Girolami, Ata Kabán. On an equivalence between pLSI and LDA. SIGIR (2003).

Theory

[BibTeX]

@inproceedings{girolami2003on, author={Mark Girolami and Ata Kabán}, title={On an equivalence between pLSI and LDA}, booktitle={SIGIR}, year={2003}, pages={433-434}, }

Andre Gohr, Myra Spiliopoulou, Alexander Hinneburg. Visually Summarizing the Evolution of Documents under a Social Tag. KDIR (2010).

Temporal

[BibTeX]

@inproceedings{gohr2010visually, author={Andre Gohr and Myra Spiliopoulou and Alexander Hinneburg}, title={Visually Summarizing the Evolution of Documents under a Social Tag}, booktitle={KDIR}, year={2010}, url={http://users.informatik.uni-halle.de/~hinnebur/PS_Files/kdir2010_TT.pdf}, }

Andre Gohr, Alexander Hinneburg, Rene Schult, Myra Spiliopoulou. Topic Evolution in a Stream of Documents. SDM (2009).

Temporal

[BibTeX]

@inproceedings{gohr2009, author={Andre Gohr and Alexander Hinneburg and Rene Schult and Myra Spiliopoulou}, title={Topic Evolution in a Stream of Documents}, booktitle={SDM}, year={2009}, pages={859-870}, url={http://users.informatik.uni-halle.de/~hinnebur/PS_Files/sdm09_APLSA.pdf}, }

Thomas L. Griffiths, Mark Steyvers. Finding Scientific Topics. PNAS (101) 2004 pp. 5228-5235.

[BibTeX]

@article{griffiths04finding, author={Thomas L. Griffiths and Mark Steyvers}, title={Finding Scientific Topics}, journal={PNAS}, year={2004}, volume={101}, number={suppl. 1}, pages={5228-5235}, }

Thomas L. Griffiths, Mark Steyvers, David M. Blei, Joshua B. Tenenbaum. Integrating Topics and Syntax. NIPS (2004).

NLP

[BibTeX]

@incollection{griffiths2004integrating, author={Thomas L. Griffiths and Mark Steyvers and David M. Blei and Joshua B. Tenenbaum}, title={Integrating Topics and Syntax}, booktitle={NIPS}, year={2004}, pages={537-544}, url={http://books.nips.cc/papers/files/nips17/NIPS2004_0642.pdf}, }

David Hall, Daniel Jurafsky, Christopher D. Manning. Studying the History of Ideas Using Topic Models. EMNLP (2008).

Bibliometrics

[BibTeX]

@inproceedings{hall2008studying, author={David Hall and Daniel Jurafsky and Christopher D. Manning}, title={Studying the History of Ideas Using Topic Models}, booktitle={EMNLP}, year={2008}, pages={363-371}, }

Gregor Heinrich. Parameter Estimation for Text Analysis. (2004).

Inference

[BibTeX][Abstract]

@techreport{heinrich2004parameter, author={Gregor Heinrich}, title={Parameter Estimation for Text Analysis}, year={2004}, url={http://www.arbylon.net/publications/text-est.pdf}, }

Presents parameter estimation methods for discrete probability distributions, which are of particular interest in text modeling. Starting with maximum likelihood, a posteriori and Bayesian estimation, central concepts like conjugate distributions and Bayesian networks are reviewed. As an application, the model of latent Dirichlet allocation (LDA) is explained in detail with a full derivation of an approximate inference algorithm based on Gibbs sampling, including a discussion of Dirichlet hyperparameter estimation.

Gregor Heinrich. A generic approach to topic models. ECML/PKDD (2009).

Scalability

[BibTeX]

@inproceedings{heinrich2009generic, author={Gregor Heinrich}, title={A generic approach to topic models}, booktitle={ECML/PKDD}, year={2009}, url={http://arbylon.net/publications/mixnet-gibbs.pdf}, }

Gregor Heinrich. Infinite LDA. (2011).

Implementations, Non-parametric

[BibTeX]

@misc{heinrich2011infinite, author={Gregor Heinrich}, title={Infinite LDA}, year={2011}, url={http://arbylon.net/projects/knowceans-ilda/knowceans-ilda.zip}, }

A simple implementation of a non-parametric model, where the number of topics is not fixed in advance. Uses Teh's direct assignment method for HDP.

Alexander Hinneburg, Hans-Henning Gabriel, Andre Gohr. Bayesian Folding-In with Dirichlet Kernels for PLSI. ICDM (2007).

Theory

[BibTeX]

@inproceedings{hinneburg2007bayesian, author={Alexander Hinneburg and Hans-Henning Gabriel and Andre Gohr}, title={Bayesian Folding-In with Dirichlet Kernels for PLSI}, booktitle={ICDM}, year={2007}, pages={499-504}, url={http://users.informatik.uni-halle.de/~hinnebur/PS_Files/blsi_icdm07.pdf}, }

Thomas Hofmann. Probabilistic latent semantic analysis. UAI (1999).

[BibTeX]

@inproceedings{hofmann1999plsa, author={Thomas Hofmann}, title={Probabilistic latent semantic analysis}, booktitle={UAI}, year={1999}, }

Matthew Hoffman, David M. Blei, Francis Bach. Online Learning for Latent Dirichlet Allocation. NIPS (2010).

[BibTeX]

@inproceedings{hoffman2010online, author={Matthew Hoffman and David M. Blei and Francis Bach}, title={Online Learning for Latent Dirichlet Allocation}, booktitle={NIPS}, year={2010}, }

Jagadeesh Jagarlamudi, Hal Daumé III. Extracting Multilingual Topics from Unaligned Comparable Corpora. (2010).

Cross-language

[BibTeX]

@inproceedings{jagarlamudi2010extracting, author={Jagadeesh Jagarlamudi and Hal Daumé III}, title={Extracting Multilingual Topics from Unaligned Comparable Corpora}, booktitle={}, year={2010}, url={http://dx.doi.org/10.1007/978-3-642-12275-0_39}, pages={444--456}, }

Mark Johnson. PCFGs, Topic Models, Adaptor Grammars, and Learning Topical Collocations and the Structure of Proper Names. (2010).

NLP

[BibTeX]

@inproceedings{johnson2010pcfgs, author={Mark Johnson}, title={PCFGs, Topic Models, Adaptor Grammars, and Learning Topical Collocations and the Structure of Proper Names}, booktitle={}, year={2010}, }

Jyri J. Kivinen, Erik B. Sudderth, Michael I. Jordan. Learning Multiscale Representations of Natural Scenes Using Dirichlet Processes. ICCV (2007).

Non-parametric, Vision

[BibTeX][Abstract]

@inproceedings{kivinen2007learning, author={Jyri J. Kivinen and Erik B. Sudderth and Michael I. Jordan}, title={Learning Multiscale Representations of Natural Scenes Using Dirichlet Processes}, booktitle={ICCV}, year={2007}, url={http://www.cs.berkeley.edu/~jordan/papers/kivinen-sudderth-jordan-iccv07.pdf}, }

We develop nonparametric Bayesian models for multiscale representations of images depicting natural scene categories. Individual features or wavelet coefficients are marginally described by Dirichlet process (DP) mixtures, yielding the heavy-tailed marginal distributions characteristic of natural images. Dependencies between features are then captured with a hidden Markov tree, and Markov chain Monte Carlo methods used to learn models whose latent state space grows in complexity as more images are observed. By truncating the potentially infinite set of hidden states, we are able to exploit efficient belief propagation methods when learning these hierarchical Dirichlet process hidden Markov trees (HDP-HMTs) from data. We show that our generative models capture interesting qualitative structure in natural scenes, and more accurately categorize novel images than models which ignore spatial relationships among features.

The paper introduces a blocked Gibbs sampler for learning a nonparametric Bayesian topic model whose topic assignments are coupled with a tree-structured graphical model.

Simon Lacoste-Julien, Fei Sha, Michael I. Jordan. DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification. NIPS (2008).

[BibTeX]

@inproceedings{lacoste2008disclda, author={Simon Lacoste-Julien and Fei Sha and Michael I. Jordan}, title={DiscLDA: Discriminative Learning for Dimensionality Reduction and Classification}, booktitle={NIPS}, year={2008}, url={http://books.nips.cc/papers/files/nips21/NIPS2008_0993.pdf}, }

Thomas K. Landauer, Susan T. Dumais. Solutions to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review (104) 1997.

[BibTeX]

@article{landauer1997solutions, author={Thomas K. Landauer and Susan T. Dumais}, title={Solutions to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge}, journal={Psychological Review}, year={1997}, number={104}, }

John Langford. Vowpal Wabbit. (2011).

Implementations

[BibTeX]

@misc{vowpalwabbit, author={John Langford}, title={Vowpal Wabbit}, year={2011}, url={https://github.com/JohnLangford/vowpal_wabbit/wiki}, }

VW includes an implementation of Hoffman et al.'s online variational LDA.

Wei Li, David Blei, Andrew McCallum. Nonparametric Bayes Pachinko Allocation. (2007).

Non-parametric

[BibTeX]

@techreport{li2007nonparametric, author={Wei Li and David Blei and Andrew McCallum}, title={Nonparametric Bayes Pachinko Allocation}, year={2007}, }

Wei-Hao Lin, Eric P. Xing, Alexander Hauptmann. A Joint Topic and Perspective Model for Ideological Discourse. ECML PKDD (2008).

[BibTeX]

@inproceedings{lin2008joint, author={Wei-Hao Lin and Eric P. Xing and Alexander Hauptmann}, title={A Joint Topic and Perspective Model for Ideological Discourse}, booktitle={ECML PKDD}, year={2008}, pages={17-32}, url={http://portal.acm.org/citation.cfm?id=1431999.1432002}, }

Rasmus Madsen, David Kauchak, Charles Elkan. Modeling Word Burstiness Using the Dirichlet Distribution. ICML (2005).

[BibTeX]

@inproceedings{madsen2005modeling, author={Rasmus Madsen and David Kauchak and Charles Elkan}, title={Modeling Word Burstiness Using the Dirichlet Distribution}, booktitle={ICML}, year={2005}, }

Andrew Kachites McCallum. MALLET: A Machine Learning for Language Toolkit. (2002).

Implementations

[BibTeX]

@misc{mallet, author={Andrew Kachites McCallum}, title={MALLET: A Machine Learning for Language Toolkit}, year={2002}, url={http://mallet.cs.umass.edu}, }

Implements Gibbs sampling for LDA in Java using fast sampling methods from Yao et al. MALLET also includes support for data preprocessing, classification, and sequence tagging.

Andrew McCallum, Andrés Corrada-Emmanuel, Xuerui Wang. Topic and Role Discovery in Social Networks. IJCAI (2005).

Networks

[BibTeX]

@inproceedings{mccallum2005topic, author={Andrew McCallum and Andrés Corrada-Emmanuel and Xuerui Wang}, title={Topic and Role Discovery in Social Networks}, booktitle={IJCAI}, year={2005}, }

Rishabh Mehrotra, Scott Sanner, Wray Buntine, Lexing Xie. Improving LDA Topic Models for Microblogs via Tweet Pooling and Automatic Labeling. SIGIR (2013).

Social media

[BibTeX]

@inproceedings{mehrotra2013improving, author={Rishabh Mehrotra and Scott Sanner and Wray Buntine and Lexing Xie}, title={Improving LDA Topic Models for Microblogs via Tweet Pooling and Automatic Labeling}, booktitle={SIGIR}, year={2013}, }

Merging tweets based on hashtags and imputed hashtags improves topic modeling.

Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, ChengXiang Zhai. Topic sentiment mixture: modeling facets and opinions in weblogs. WWW (2007).

[BibTeX]

@inproceedings{mei2007topic, author={Qiaozhu Mei and Xu Ling and Matthew Wondra and Hang Su and ChengXiang Zhai}, title={Topic sentiment mixture: modeling facets and opinions in weblogs}, booktitle={WWW}, year={2007}, }

Qiaozhu Mei, Xuehua Shen, ChengXiang Zhai. Automatic labeling of multinomial topic models. KDD (2007).

User interface

[BibTeX]

@inproceedings{mei2007automatic, author={Qiaozhu Mei and Xuehua Shen and ChengXiang Zhai}, title={Automatic labeling of multinomial topic models}, booktitle={KDD}, year={2007}, pages={490-499}, }

Qiaozhu Mei, Deng Cai, Duo Zhang, ChengXiang Zhai. Topic modeling with network regularization. WWW (2008).

Networks

[BibTeX][Abstract]

@inproceedings{mei2008topic, author={Qiaozhu Mei and Deng Cai and Duo Zhang and ChengXiang Zhai}, title={Topic modeling with network regularization}, booktitle={WWW}, year={2008}, url={http://portal.acm.org/citation.cfm?id=1367512}, }

In this paper, we formally define the problem of topic modeling with network structure (TMN). We propose a novel solution to this problem, which regularizes a statistical topic model with a harmonic regularizer based on a graph structure in the data. The proposed method bridges topic modeling and social network analysis, which leverages the power of both statistical topic models and discrete regularization. The output of this model well summarizes topics in text, maps a topic on the network, and discovers topical communities. With concrete selection of a topic model and a graph-based regularizer, our model can be applied to text mining problems such as author-topic analysis, community discovery, and spatial text mining. Empirical experiments on two different genres of data show that our approach is effective, which improves text-oriented methods as well as network-oriented methods. The proposed model is general; it can be applied to any text collections with a mixture of topics and an associated network structure.

David Mimno, Andrew McCallum. Expertise Modeling for Matching Papers with Reviewers. KDD (2007).

[BibTeX]

@inproceedings{mimno2007expertise, author={David Mimno and Andrew McCallum}, title={Expertise Modeling for Matching Papers with Reviewers}, booktitle={KDD}, year={2007}, }

David Mimno, Andrew McCallum. Mining a digital library for influential authors. JCDL (2007).

Bibliometrics

[BibTeX]

@inproceedings{mimno2007mining, author={David Mimno and Andrew McCallum}, title={Mining a digital library for influential authors}, booktitle={JCDL}, year={2007}, }

David Mimno, Wei Li, Andrew McCallum. Mixtures of Hierarchical Topics with Pachinko Allocation. ICML (2007).

[BibTeX]

@inproceedings{mimno2007hierarchical, author={David Mimno and Wei Li and Andrew McCallum}, title={Mixtures of Hierarchical Topics with Pachinko Allocation}, booktitle={ICML}, year={2007}, }

David Mimno, Andrew McCallum. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. UAI (2008).

[BibTeX]

@inproceedings{mimno2008dmr, author={David Mimno and Andrew McCallum}, title={Topic models conditioned on arbitrary features with Dirichlet-multinomial regression}, booktitle={UAI}, year={2008}, url={http://www.cs.umass.edu/~mimno/papers/dmr-uai.pdf}, }

Per-document Dirichlet priors over topic distributions are generated using a log-linear combination of observed document features and learned feature-topic parameters. Implemented in MALLET.
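The log-linear prior construction is compact enough to sketch. A minimal Python illustration of computing a document's Dirichlet parameters from its features (names and shapes are illustrative, not the paper's or MALLET's code):

```python
from math import exp

def dmr_alpha(x, lam):
    """Document-specific Dirichlet prior over topics.
    x: observed feature vector for one document (e.g. author, venue indicators);
    lam[k]: learned regression weights for topic k.
    Each alpha_k = exp(x . lam_k), so features shift which topics the
    document favors a priori."""
    return [exp(sum(xi * li for xi, li in zip(x, lam_k))) for lam_k in lam]
```

With all-zero weights this reduces to a symmetric prior with alpha_k = 1; positive weights on a feature raise the prior mass of the corresponding topics for documents that have that feature.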

David Mimno, Hanna Wallach, Andrew McCallum. Gibbs Sampling for Logistic Normal Topic Models with Graph-Based Priors. NIPS Workshop on Analyzing Graphs (2008).

Networks

[BibTeX]

@inproceedings{mimno2008gibbs, author={David Mimno and Hanna Wallach and Andrew McCallum}, title={Gibbs Sampling for Logistic Normal Topic Models with Graph-Based Priors}, booktitle={NIPS Workshop on Analyzing Graphs}, year={2008}, url={http://www.cs.umass.edu/~mimno/papers/sampledlgstnorm.pdf}, }

Introduces an auxiliary-variable method for Gibbs sampling in non-conjugate topic models.

David Mimno, Hanna Wallach, Jason Naradowsky, David A. Smith, Andrew McCallum. Polylingual Topic Models. EMNLP (2009).

Cross-language

[BibTeX]

@inproceedings{mimno2009polylingual, author={David Mimno and Hanna Wallach and Jason Naradowsky and David A. Smith and Andrew McCallum}, title={Polylingual Topic Models}, booktitle={EMNLP}, year={2009}, url={http://www.cs.umass.edu/~mimno/papers/mimno2009polylingual.pdf}, }

David Mimno. Reconstructing Pompeian Households. UAI (2011).

Cross-language

[BibTeX]

@inproceedings{mimno2011reconstructing, author={David Mimno}, title={Reconstructing Pompeian Households}, booktitle={UAI}, year={2011}, url={http://www.cs.princeton.edu/~mimno/papers/pompeii.pdf}, }

David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, Andrew McCallum. Optimizing Semantic Coherence in Topic Models. EMNLP (2011).

Evaluation

[BibTeX]

@inproceedings{mimno2011optimizing, author={David Mimno and Hanna Wallach and Edmund Talley and Miriam Leenders and Andrew McCallum}, title={Optimizing Semantic Coherence in Topic Models}, booktitle={EMNLP}, year={2011}, url={http://www.cs.princeton.edu/~mimno/papers/mimno-semantic-emnlp.pdf}, }

A simple, automated metric that uses only information contained in the training documents has strong ability to predict human judgments of topic coherence.
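The metric (often called "UMass coherence") scores a topic's top words by how often pairs of them co-occur in the training documents themselves, with add-one smoothing to avoid log of zero. A minimal Python sketch, with implementation details assumed for illustration (it also assumes each top word occurs in at least one document):

```python
from math import log

def umass_coherence(top_words, docs):
    """Document co-occurrence coherence for one topic.
    top_words: the topic's top words, ordered by probability;
    docs: training documents as iterables of words.
    Sums log((D(w_i, w_j) + 1) / D(w_j)) over pairs, where D counts
    documents containing the given words and w_j precedes w_i."""
    docsets = [set(d) for d in docs]
    def df(*ws):  # number of documents containing all of ws
        return sum(all(w in s for w in ws) for s in docsets)
    score = 0.0
    for i in range(1, len(top_words)):
        for j in range(i):
            score += log((df(top_words[i], top_words[j]) + 1) / df(top_words[j]))
    return score
```

Scores are at most slightly above zero and grow more negative as the top words co-occur less; ranking topics by this score tracks human coherence judgments surprisingly well.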

David Mimno, David Blei. Bayesian Checking for Topic Models. EMNLP (2011).

Evaluation

[BibTeX]

@inproceedings{mimno2011bayesian, author={David Mimno and David Blei}, title={Bayesian Checking for Topic Models}, booktitle={EMNLP}, year={2011}, url={http://www.cs.princeton.edu/~mimno/papers/mimno-ppcs-emnlp.pdf}, }

Posterior predictive checks are useful in detecting lack of fit in topic models and identifying which metadata-enriched models might be useful.

Indraneel Mukherjee, David Blei. Relative Performance Guarantees for Approximate Inference in Latent Dirichlet Allocation. NIPS (2008).

Inference

[BibTeX]

@inproceedings{mukherjee2008relative, author={Indraneel Mukherjee and David Blei}, title={Relative Performance Guarantees for Approximate Inference in Latent Dirichlet Allocation}, booktitle={NIPS}, year={2008}, url={http://books.nips.cc/papers/files/nips21/NIPS2008_0434.pdf}, }

Claudiu Musat, Julien Velcin, Stefan Trausan-Matu, Marian-Andrei Rizoiu. Improving Topic Evaluation Using Conceptual Knowledge. IJCAI (2011).

Evaluation

[BibTeX]

@inproceedings{musat2011improving, author={Claudiu Musat and Julien Velcin and Stefan Trausan-Matu and Marian-Andrei Rizoiu}, title={Improving Topic Evaluation Using Conceptual Knowledge}, booktitle={IJCAI}, year={2011}, }

Ramesh Nallapati, Amr Ahmed, Eric P. Xing, William Cohen. Joint Latent Topic Models for Text and Citations. KDD (2008).

Networks

[BibTeX]

@inproceedings{nallapati2008joint, author={Ramesh Nallapati and Amr Ahmed and Eric P. Xing and William Cohen}, title={Joint Latent Topic Models for Text and Citations}, booktitle={KDD}, year={2008}, url={http://portal.acm.org/citation.cfm?id=1401957}, pages={542--550}, }

This is one of the first papers to address joint topic models of text and hyperlinks. Used as a baseline in the more recent Relational Topic Models. (R.N.)

Ramesh Nallapati, William Cohen, Susan Ditmore, John Lafferty, Kin Ung. Multi-scale Topic Tomography. KDD (2007).

Temporal

[BibTeX]

@inproceedings{nallapati2007multiscale, author={Ramesh Nallapati and William Cohen and Susan Ditmore and John Lafferty and Kin Ung}, title={Multi-scale Topic Tomography}, booktitle={KDD}, year={2007}, url={http://portal.acm.org/citation.cfm?id=1281249}, pages={520--529}, }

Models variation of topic content with time at various scales of resolution. A novel variant of dynamic topic models that uses the Poisson distribution for word generation, and wavelets. (R.N.)

Ramesh Nallapati, William Cohen, John Lafferty. Parallelized Variational EM for Latent Dirichlet Allocation: An experimental evaluation of speed and scalability. ICDM workshop on high performance data mining (2007).

Scalability

[BibTeX]

@inproceedings{nallapati2007parallelized, author={Ramesh Nallapati and William Cohen and John Lafferty}, title={Parallelized Variational EM for Latent Dirichlet Allocation: An experimental evaluation of speed and scalability}, booktitle={ICDM workshop on high performance data mining}, year={2007}, url={http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.68.4178&rep=rep1&type=pdf}, }

Early paper on parallel implementations of variational EM for LDA. (R.N.)

Ramesh Nallapati. multithreaded lda-c. (2010).

Implementations

[BibTeX]

@misc{nallapati2010multi-lda-c, author={Ramesh Nallapati}, title={multithreaded lda-c}, year={2010}, url={https://sites.google.com/site/rameshnallapati/software}, }

Multithreaded extension of David Blei's LDA implementation in C. Speeds up the computation by orders of magnitude depending on the number of processors.

David Newman, Chaitanya Chemudugunta, Padhraic Smyth. Statistical entity-topic models. KDD (2006).

[BibTeX]

@inproceedings{newman2006statistical, author={David Newman and Chaitanya Chemudugunta and Padhraic Smyth}, title={Statistical entity-topic models}, booktitle={KDD}, year={2006}, }

D. Newman, S. Block. Probabilistic Topic Decomposition of an Eighteenth-Century American Newspaper. JASIST 2006.

[BibTeX]

@article{newman2005probabilistic, author={D. Newman and S. Block}, title={Probabilistic Topic Decomposition of an Eighteenth-Century American Newspaper}, journal={JASIST}, year={2006}, }

David Newman, Jey Han Lau, Karl Grieser, Timothy Baldwin. Automatic Evaluation of Topic Coherence. NAACL (2010).

Evaluation

[BibTeX]

@inproceedings{newman2010automatic, author={David Newman and Jey Han Lau and Karl Grieser and Timothy Baldwin}, title={Automatic Evaluation of Topic Coherence}, booktitle={NAACL}, year={2010}, }

Xiaochuan Ni, Jian-Tao Sun, Jian Hu, Zheng Chen. Mining Multilingual Topics from Wikipedia. WWW (2009).

Cross-language

[BibTeX][Abstract]

@inproceedings{ni2009multilingual, author={Xiaochuan Ni and Jian-Tao Sun and Jian Hu and Zheng Chen}, title={Mining Multilingual Topics from Wikipedia}, booktitle={WWW}, year={2009}, url={http://www2009.eprints.org/158/}, }
In this paper, we try to leverage a large-scale and multilingual knowledge base, Wikipedia, to help effectively analyze and organize Web information written in different languages. Based on the observation that one Wikipedia concept may be described by articles in different languages, we adapt existing topic modeling algorithm for mining multilingual topics from this knowledge base. The extracted "universal" topics have multiple types of representations, with each type corresponding to one language. Accordingly, new documents of different languages can be represented in a space using a group of universal topics, which makes various multilingual Web applications feasible.

Xuan-Hieu Phan, Cam-Tu Nguyen. GibbsLDA++. (2007).

Implementations

[BibTeX]

@misc{gibbslda++, author={Xuan-Hieu Phan and Cam-Tu Nguyen}, title={GibbsLDA++}, year={2007}, url={http://gibbslda.sourceforge.net}, }
C/C++ implementation of LDA with Gibbs sampling.
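The algorithm GibbsLDA++ implements is collapsed Gibbs sampling for LDA. As a rough illustration of that technique (a toy Python sketch, not GibbsLDA++'s actual code; the function name `gibbs_lda` is hypothetical), each token's topic is resampled in proportion to (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta), with the token's own count removed first:

```python
import random

def gibbs_lda(docs, K, alpha=0.1, beta=0.01, iters=100, seed=0):
    """Toy collapsed Gibbs sampler for LDA.
    docs: list of documents, each a list of integer word ids.
    Returns document-topic and topic-word count matrices."""
    rng = random.Random(seed)
    V = 1 + max(w for doc in docs for w in doc)   # vocabulary size
    ndk = [[0] * K for _ in docs]                 # doc-topic counts
    nkw = [[0] * V for _ in range(K)]             # topic-word counts
    nk = [0] * K                                  # topic totals
    z = []                                        # topic assignment per token
    # random initialization
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:
            k = rng.randrange(K)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    # sweep over every token, resampling its topic from the collapsed conditional
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1   # remove token
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(K)]
                k = rng.choices(range(K), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1   # add back
    return ndk, nkw
```

A production sampler like GibbsLDA++ adds caching, sparse data structures, and model serialization on top of this core loop.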

Jukka Perkiö, Wray L. Buntine, Sami Perttu. Exploring Independent Trends in a Topic-Based Search Engine. Web Intelligence (2004).

[BibTeX]

@inproceedings{perkio2004exploring, author={Jukka Perkiö and Wray L. Buntine and Sami Perttu}, title={Exploring Independent Trends in a Topic-Based Search Engine}, booktitle={Web Intelligence}, year={2004}, pages={664-668}, }

Matthew Purver, Konrad Körding, Thomas L. Griffiths, Joshua Tenenbaum. Unsupervised Topic Modelling for Multi-Party Spoken Discourse. ACL (2006).

[BibTeX]

@inproceedings{purver2006unsupervised, author={Matthew Purver and Konrad Körding and Thomas L. Griffiths and Joshua Tenenbaum}, title={Unsupervised Topic Modelling for Multi-Party Spoken Discourse}, booktitle={ACL}, year={2006}, url={http://web.mit.edu/cocosci/Papers/purver-et-al06acl.pdf}, }

Daniel Ramage, Evan Rosen. Stanford Topic Modeling Toolbox. (2009).

Implementations

[BibTeX]

@misc{ramage-tmt, author={Daniel Ramage and Evan Rosen}, title={Stanford Topic Modeling Toolbox}, year={2009}, url={http://nlp.stanford.edu/software/tmt/tmt-0.3/}, }
Scala implementation of LDA and LabeledLDA. Input and output integration with Excel.

Daniel Ramage, David Hall, Ramesh Nallapati, Christopher D. Manning. Labeled LDA: A Supervised Topic Model for Credit Attribution in Multi-Labeled Corpora. EMNLP (2009).

[BibTeX]

@inproceedings{ramage2009labeled, author={Daniel Ramage and David Hall and Ramesh Nallapati and Christopher D. Manning}, title={Labeled LDA: A Supervised Topic Model for Credit Attribution in Multi-Labeled Corpora}, booktitle={EMNLP}, year={2009}, }

Daniel Ramage, Susan Dumais, Dan Liebling. Characterizing Microblogs with Topic Models. ICWSM (2010).

[BibTeX]

@inproceedings{ramage2010characterizing, author={Daniel Ramage and Susan Dumais and Dan Liebling}, title={Characterizing Microblogs with Topic Models}, booktitle={ICWSM}, year={2010}, url={http://www.stanford.edu/~dramage/papers/twitter-icwsm10.pdf}, }

Joseph Reisinger, Austin Waters, Brian Silverthorn, Raymond J. Mooney. Spherical Topic Models. ICML (2010).

[BibTeX][Abstract]

@inproceedings{reisinger2010spherical, author={Joseph Reisinger and Austin Waters and Brian Silverthorn and Raymond J. Mooney}, title={Spherical Topic Models}, booktitle={ICML}, year={2010}, url={http://www.cs.utexas.edu/users/ml/papers/reisinger.icml10.pdf}, }
We introduce the Spherical Admixture Model (SAM), a Bayesian topic model for arbitrary L2 normalized data. SAM maintains the same hierarchical structure as Latent Dirichlet Allocation (LDA), but models documents as points on a high-dimensional spherical manifold, allowing a natural likelihood parameterization in terms of cosine distance. Furthermore, SAM can model word absence/presence at the document level, and unlike previous models can assign explicit negative weight to topic terms. Performance is evaluated empirically, both through human ratings of topic quality and through diverse classification tasks from natural language processing and computer vision. In these experiments, SAM consistently outperforms existing models.
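The cosine-distance parameterization the abstract refers to treats each document as a point on the unit sphere. A generic sketch of that representation (not SAM itself; function names here are illustrative):

```python
import math

def l2_normalize(vec):
    """Project a term-frequency vector onto the unit sphere."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else list(vec)

def cosine_similarity(u, v):
    """Dot product of two unit vectors = cosine of the angle between them."""
    return sum(a * b for a, b in zip(l2_normalize(u), l2_normalize(v)))
```

Because normalization discards vector length, two documents with identical word proportions but different lengths map to the same point, which is what makes cosine similarity a natural fit for this geometry.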

Michal Rosen-Zvi, Tom Griffiths, Mark Steyvers, Padhraic Smyth. The Author-Topic Model for Authors and Documents. UAI (2004).

[BibTeX]

@inproceedings{rosenzvi2004author, author={Michal Rosen-Zvi and Tom Griffiths and Mark Steyvers and Padhraic Smyth}, title={The Author-Topic Model for Authors and Documents}, booktitle={UAI}, year={2004}, }

Ruslan Salakhutdinov, Geoffrey Hinton. Replicated Softmax: an Undirected Topic Model. NIPS (2009).

[BibTeX]

@inproceedings{salakhutdinov2009replicated, author={Ruslan Salakhutdinov and Geoffrey Hinton}, title={Replicated Softmax: an Undirected Topic Model}, booktitle={NIPS}, year={2009}, url={http://books.nips.cc/papers/files/nips22/NIPS2009_0817.pdf}, }

Carson Sievert, Kenneth E. Shirley. LDAvis: A method for visualizing and interpreting topics. Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces (2014).

[BibTeX]

@inproceedings{sievert2014ldavis, author={Carson Sievert and Kenneth E. Shirley}, title={LDAvis: A method for visualizing and interpreting topics}, booktitle={Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces}, year={2014}, url={https://github.com/cpsievert/LDAvis}, }

Shravan Narayanamurthy. Yahoo! LDA. (2011).

Implementations

[BibTeX]

@misc{YahooLDA, author={Shravan Narayanamurthy}, title={Yahoo! LDA}, year={2011}, url={https://github.com/shravanmn/Yahoo_LDA/wiki}, }
Y!LDA implements a fast, sampling-based, distributed algorithm. See Smola and Narayanamurthy for details.

Alexander Smola, Shravan Narayanamurthy. An Architecture for Parallel Topic Models. VLDB (2010).

Scalability

[BibTeX]

@inproceedings{smola2010architecture, author={Alexander Smola and Shravan Narayanamurthy}, title={An Architecture for Parallel Topic Models}, booktitle={VLDB}, year={2010}, }

Mark Steyvers, Tom Griffiths. Matlab Topic Modeling Toolbox. (2005).

Implementations

[BibTeX]

@misc{steyvers-tmtb, author={Mark Steyvers and Tom Griffiths}, title={Matlab Topic Modeling Toolbox}, year={2005}, url={http://psiexp.ss.uci.edu/research/programs_data/toolbox.htm}, }
Implements LDA, Author-Topic, HMM-LDA, LDA-COL. Tools for 2D visualization.

Mark Steyvers, Tom Griffiths. Probabilistic Topic Models. In Landauer, T., Mcnamara, D., Dennis, S., Kintsch, W., Latent Semantic Analysis: A Road to Meaning. (2006).

Where to start

[BibTeX]

@incollection{steyvers2006probabilistic, author={Mark Steyvers and Tom Griffiths}, editor={Landauer, T. and Mcnamara, D. and Dennis, S. and Kintsch, W.}, title={Probabilistic Topic Models}, booktitle={Latent Semantic Analysis: A Road to Meaning.}, year={2006}, publisher={Laurence Erlbaum}, url={http://cocosci.berkeley.edu/tom/papers/SteyversGriffiths.pdf}, }
A good introduction to topic modeling.

Claudio Taranto, Nicola Di Mauro, Floriana Esposito. rsLDA: a Bayesian Hierarchical Model for Relational Learning. ICDKE (2011).

[BibTeX]

@inproceedings{taranto2011rslda, author={Claudio Taranto and Nicola Di Mauro and Floriana Esposito}, title={rsLDA: a Bayesian Hierarchical Model for Relational Learning}, booktitle={ICDKE}, year={2011}, url={http://www.di.uniba.it/~ndm/publications/files/taranto11icdke.pdf}, }

Yee-Whye Teh, David Newman, Max Welling. A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation. NIPS (2006).

Inference

[BibTeX]

@inproceedings{teh2006collapsed, author={Yee-Whye Teh and David Newman and Max Welling}, title={A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation}, booktitle={NIPS}, year={2006}, url={http://books.nips.cc/papers/files/nips19/NIPS2006_0511.pdf}, }

Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, David M. Blei. Hierarchical Dirichlet Processes. JASA (101) 2006.

Non-parametric

[BibTeX]

@article{teh2006hierarchical, author={Yee Whye Teh and Michael I. Jordan and Matthew J. Beal and David M. Blei}, title={Hierarchical Dirichlet Processes}, journal={JASA}, year={2006}, url={http://dx.doi.org/10.1198/016214506000000302}, volume={101}, }

Kristina Toutanova, Mark Johnson. A Bayesian LDA-based model for semi-supervised part-of-speech tagging. NIPS (2007).

NLP

[BibTeX]

@inproceedings{toutanova2007bayesian, author={Kristina Toutanova and Mark Johnson}, title={A Bayesian LDA-based model for semi-supervised part-of-speech tagging}, booktitle={NIPS}, year={2007}, pages={1521-1528}, url={http://books.nips.cc/papers/files/nips20/NIPS2007_0964.pdf}, }

Hanna M. Wallach. Topic modeling: beyond bag-of-words. ICML (2006).

[BibTeX]

@inproceedings{wallach2006beyond, author={Hanna M. Wallach}, title={Topic modeling: beyond bag-of-words}, booktitle={ICML}, year={2006}, }

Hanna Wallach, Iain Murray, Ruslan Salakhutdinov, David Mimno. Evaluation Methods for Topic Models. ICML (2009).

Evaluation

[BibTeX]

@inproceedings{wallach2009evaluation, author={Hanna Wallach and Iain Murray and Ruslan Salakhutdinov and David Mimno}, title={Evaluation Methods for Topic Models}, booktitle={ICML}, year={2009}, url={http://www.cs.umass.edu/~mimno/papers/wallach09evaluation.pdf}, }
Commonly used methods for estimating the probability of held-out words may be unstable. This paper presents more accurate methods.

Hanna Wallach, David Mimno, Andrew McCallum. Rethinking LDA: Why priors matter. NIPS (2009).
