Terribly and predictably, my goal of doing something significant with one book on my shelf each month proceeds in an unsatisfactory way. October's book is – was – Density-Matrix Renormalization ed. Peschel et al., from the Springer Lecture Notes in Physics series (#528).
I got the book something like 20 years ago, after Pittel and Dukelsky started using the DMRG method in nuclear physics. I was a young academic looking to move beyond my PhD work, and to the extent that I felt competent about anything outside the narrow focus of my PhD, computational work was the thing, so I was intrigued to learn more about this method. My plan was to start by applying it to the Lipkin-Meshkov-Glick (LMG) model, a simplified version of the nuclear shell model, just to get to grips with the method, and see where that led me. In the end, I got diverted into some other methods for solving the LMG model, inspired by a collaboration with a condensed matter physicist friend, and we wrote up some work on that. We ran out of energy to publish it after one rejection, and now it sits as the first (but not the last) of my outputs that have got no further than the arXiv.
Anyway - I never did become a practitioner of DMRG, though I felt like the basic idea was a simple enough concept in terms of a numerical algorithm: Start from a truncated space and find the eigenvalues and eigenvectors. Keep "the most important" (suitably defined) eigenvectors and throw away the rest. Increase the size of your problem by adding extra states that were not included in the original truncation, to replace the "unimportant" states that were thrown away. Solve for eigenstates in this new space, and keep discarding and enlarging in this way until you (hopefully) converge on a solution which is (hopefully) a good approximation to the true solution. This caps the size of the space you ever need to deal with at a manageable value.
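To make that keep-and-enlarge loop concrete, here is a minimal sketch of the kind I might have written for the warm-up problem: grow a single-particle chain one site at a time, diagonalize, and keep only the m lowest-energy states. (A caveat: keeping the lowest-energy states is the older Wilson-style choice, not the density-matrix one, and as I recall it is exactly the choice White showed can fail - but it shows the skeleton of the iteration. The function name and parameters are my own invention, not from the book.)

```python
import numpy as np

def grow_chain(n_sites=60, m=10, t=1.0):
    """Grow a single-particle tight-binding chain site by site,
    truncating to the m lowest-energy states after each growth step."""
    H = np.zeros((1, 1))       # one-site block Hamiltonian
    edge = np.array([1.0])     # edge-site vector in the current (kept) basis
    for _ in range(n_sites - 1):
        d = H.shape[0]
        # enlarge: couple one new site to the block's edge with hopping -t
        H_sup = np.zeros((d + 1, d + 1))
        H_sup[:d, :d] = H
        H_sup[:d, d] = -t * edge
        H_sup[d, :d] = -t * edge
        w, U = np.linalg.eigh(H_sup)   # eigenvalues in ascending order
        k = min(m, d + 1)
        H = np.diag(w[:k])             # keep the k lowest-energy states...
        edge = U[d, :k]                # ...and rotate the new edge site into that basis
    return np.linalg.eigvalsh(H)

energies = grow_chain()
```

Because each truncation is an isometric compression of the true chain Hamiltonian, the approximate energies stay inside the exact band, which runs from -2t to +2t for this chain.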
The "density matrix" part of the algorithm is to do with how one chooses the most important states. In the space being dealt with, the eigenvalues of the density matrix are found, and it is those with the greatest values that are selected as the "important" ones. The "renormalization" is the shifting of the model space through the algorithm, at least that's how I picture it, though really the words "renormalization" and "group" bind together to refer back to a method from Grassmann algebras in field theory that inspired the forerunner of the DMRG method. Anything to do with Group Theory is, at least to me, obscure by the time the DMRG has turned into a neat numerical method.
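The selection step itself is short enough to sketch. In a hypothetical toy setup (a random normalized state standing in for a real superblock ground state - the dimensions and variable names here are my own, not the book's), one traces out the environment to get the system's reduced density matrix and keeps the eigenvectors with the largest eigenvalues:

```python
import numpy as np

# Toy stand-in: a normalized "superblock" wavefunction psi_{s,e} over
# system states s and environment states e (random numbers, for illustration).
rng = np.random.default_rng(0)
d_sys, d_env, m = 8, 8, 3
psi = rng.standard_normal((d_sys, d_env))
psi /= np.linalg.norm(psi)

# Reduced density matrix of the system: trace out the environment.
rho = psi @ psi.T                # rho_{ij} = sum_e psi_{i,e} psi_{j,e}

# Keep the m eigenvectors with the largest eigenvalues: the "important" states.
w, V = np.linalg.eigh(rho)       # eigenvalues in ascending order
kept = V[:, -m:]                 # last m columns carry the largest weights

# The discarded density-matrix weight measures the truncation error.
truncation_error = 1.0 - w[-m:].sum()
```

Since rho has unit trace, the kept eigenvalues sum to at most 1 and the discarded weight tells you how much of the state you threw away; this is equivalent to keeping the leading singular vectors of psi.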
Anyway - the book... much of what I wrote above came back to me through being reminded of the method by the preface article "How it all began: A Personal Account" by Steven White. (NB this is hidden in the "Front Matter" chapter if you try to access the book online.) As an autobiographical reminiscence, this was very readable and helped me contextualise things. I then felt emboldened to go on to the first "real" chapter, "Wilson's Numerical Renormalization Group" by Theo Costi. Unfortunately I found this impossible to understand, as it presupposed a high level of knowledge of the kind of condensed matter systems and models that the method was originally applied to. I gave up and tried the second chapter, "The Density Matrix Renormalization Group" (Noack and White), which was much more textbook-like - a kind of tutorial for the method, with a first example being a particle in a discretised box (which the condensed matter physicists call "a single particle on a tight-binding chain") - and I was able to follow the arguments here well. When it got to the details of exactly how the density matrix is used and the desired states projected out, I did not work through everything in detail, as I would have liked to do in order to sketch out my own implementation on computer, but had I given it the kind of time I hoped to give each book in this project, I think I'd have got there.
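That first example is easy to reproduce directly, without any renormalization at all: diagonalize the open-chain hopping Hamiltonian and check against the known closed-form spectrum. (A sketch assuming numpy; the sizes are arbitrary.)

```python
import numpy as np

def tight_binding_chain(n=100, t=1.0):
    """Single particle on an open n-site chain: nearest-neighbour hopping -t."""
    H = -t * (np.eye(n, k=1) + np.eye(n, k=-1))
    return np.linalg.eigvalsh(H)      # eigenvalues in ascending order

E = tight_binding_chain()

# Closed form for the open chain: E_k = -2 t cos(k*pi/(n+1)), k = 1..n
k = np.arange(1, 101)
E_exact = -2.0 * np.cos(k * np.pi / 101.0)
```

For small k this expands to E_k ≈ -2t + t(kπ/(n+1))², i.e. the particle-in-a-box k² spectrum shifted by a constant, which is why the two names describe the same problem.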
Okay. Not long left to do November's book.