Exact Markov chain Monte Carlo sampling

I don't like the name: the word "exact" can mislead us about what the method actually does. I use it in the title anyway because it is the title of the chapter in the book "Information Theory, Inference, and Learning Algorithms" by David MacKay, from which I learned the theory.
The method is also known as perfect simulation or coupling from the past, and those are perhaps better names.
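To give a feel for what coupling from the past does, here is a minimal sketch for a small finite Markov chain. This is my own illustration under stated assumptions, not code from MacKay's book; the function name `cftp_sample` and the grand coupling via a shared uniform per time step are choices of the sketch.

```python
import numpy as np

def cftp_sample(P, rng=None, max_doublings=20):
    """Propp-Wilson coupling from the past for a small finite Markov chain.

    P is a row-stochastic transition matrix. One chain is started from every
    state at time -T, and all chains share the same uniform random number at
    each time step (a grand coupling via inverse-CDF updates). If every chain
    has reached the same state by time 0, that state is an exact draw from the
    stationary distribution; otherwise T is doubled, reusing the old random
    numbers for the steps nearest time 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = P.shape[0]
    cdf = np.cumsum(P, axis=1)    # per-state CDF tables for inverse sampling
    us = []                       # us[k] drives the step from time -(k+1) to -k
    T = 1
    for _ in range(max_doublings):
        while len(us) < T:
            us.append(rng.random())
        states = np.arange(n)     # one chain per starting state, at time -T
        for t in range(T, 0, -1): # evolve forward from time -T to time 0
            u = us[t - 1]
            states = np.array([min(np.searchsorted(cdf[s], u, side="right"), n - 1)
                               for s in states])
        if np.all(states == states[0]):   # coalescence: an exact stationary sample
            return int(states[0])
        T *= 2
    raise RuntimeError("chains did not coalesce; increase max_doublings")
```

The key point of the construction is that the random number attached to each past time step is fixed and reused whenever T is doubled; that reuse is what makes the state at time 0 an exact sample rather than an approximate one.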
Monte Carlo method

Monte Carlo methods are useful in Bayesian data modeling because maximizing the posterior probability is often very difficult, and even fitting a Gaussian approximation to the posterior can be hard.
Monte Carlo methods are valuable when we want to generate samples from a distribution, and when we want to estimate expectations of various functions under that distribution.
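As a concrete toy example (my own, with an arbitrary choice of target distribution and function), estimating an expectation from samples looks like this:

```python
import numpy as np

# Toy example: estimate E[x**2] for x ~ N(0, 1), whose true value is 1.
rng = np.random.default_rng(0)
samples = rng.normal(size=100_000)                      # draws from the target distribution
values = samples ** 2                                   # f evaluated at each sample
estimate = values.mean()                                # Monte Carlo estimate of E[f(x)]
std_error = values.std(ddof=1) / np.sqrt(values.size)   # its Monte Carlo standard error
print(f"E[x^2] ~ {estimate:.4f} +/- {std_error:.4f}")
```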
What we deal with in this post is only a small part of the Monte Carlo family. I hope to get a chance to introduce the rest on this blog soon; if not, see one of the repositories on my GitHub.
As I mentioned in the previous post, one of the purposes of this blog is to supplement my data science study repositories on GitHub. I will gradually post and present all the IPython and Mathematica notebooks.
I felt there was no good Python tutorial for spectral clustering (at least from my search). Anyone serious about machine learning can use scikit-learn, and the theory of spectral clustering is not hard to find either.
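For reference, a minimal scikit-learn example looks like the following; the dataset and parameter values are only illustrative, not tuned.

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

# Two interleaved half-moons: a classic case where k-means fails but
# spectral clustering (clustering in the graph Laplacian's eigenvector space) works.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

model = SpectralClustering(
    n_clusters=2,
    affinity="nearest_neighbors",   # build a k-NN similarity graph
    n_neighbors=10,                 # illustrative value, not tuned
    assign_labels="kmeans",         # k-means on the embedded points
    random_state=0,
)
labels = model.fit_predict(X)       # cluster label (0 or 1) for each point
```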