Tuesday, 19 November 2024

October book: Density-Matrix Renormalization.

Terribly and predictably, my goal of doing something significant with one book on my shelf each month proceeds in an unsatisfactory way.  October's book is – was – Density-Matrix Renormalization ed. Peschel et al., from the Springer Lecture Notes in Physics series (#528).  

I got the book something like 20 years ago after Pittel and Dukelsky started using the DMRG method in nuclear physics.  I was a young academic looking to move beyond my PhD work, and to the extent that I felt competent about anything outside the narrow focus of my PhD, computational work was the thing, and I was intrigued to learn more about this method.  My plan was to start by applying it to the Lipkin-Meshkov-Glick (LMG) model, a simplified version of the nuclear shell model, just to get to grips with the method, and see where that led me.  In the end, I got diverted into some other methods for solving the LMG model, inspired by a collaboration with a condensed-matter-physicist friend, and we wrote up some work on that, which we ran out of energy to publish after one rejection; it now sits as the first (but not the last) of my outputs that have got no further than the arXiv.

Anyway - I never did become a practitioner of DMRG, though the basic idea seemed simple enough as a numerical algorithm:  Start from a truncated space and find the eigenvalues and eigenvectors.  Keep "the most important" (suitably defined) eigenvectors and throw away the rest.  Increase the size of your problem by adding extra states that were not included in the original truncation, to replace the "unimportant" states that were thrown away.  Solve for eigenstates in this new space, and keep discarding and enlarging in this way until you (hopefully) converge on a solution which is (hopefully) a good approximation to the true one.  This keeps the size of the space you ever need to deal with limited to a manageable value.

The "density matrix" part of the algorithm is to do with how one chooses the most important states.  In the space being dealt with, the eigenvalues of the density matrix are found, and it is the states with the greatest eigenvalues that are selected as the "important ones".  The "renormalization" is the shifting of the model space as the algorithm proceeds, at least that's how I picture it, though really the words "renormalization" and "group" bind together to refer back to renormalization group methods from field theory that inspired the forerunner of the DMRG method (Wilson's numerical renormalization group).  Anything to do with group theory is, at least to me, obscure by the time the DMRG has turned into a neat numerical method.
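Since I think best in code, here is a minimal numpy sketch of just that truncation step, with a random Hermitian matrix standing in for a real superblock Hamiltonian - a toy illustration of the idea, not a working DMRG:

```python
# Toy illustration of the DMRG truncation step (not a full DMRG!).
# A random Hermitian matrix stands in for a real superblock Hamiltonian.
import numpy as np

rng = np.random.default_rng(42)

d_block, d_env = 8, 8     # dimensions of "block" and "environment"
m = 4                     # number of block states to keep

# Random Hermitian superblock Hamiltonian.
H = rng.normal(size=(d_block * d_env, d_block * d_env))
H = 0.5 * (H + H.T)

# Ground state of the superblock, reshaped so rows label block states.
evals, evecs = np.linalg.eigh(H)
psi = evecs[:, 0].reshape(d_block, d_env)

# Reduced density matrix of the block: rho = Tr_env |psi><psi|.
rho = psi @ psi.T

# Keep the m eigenvectors of rho with the largest eigenvalues.
w, U = np.linalg.eigh(rho)        # eigenvalues come in ascending order
order = np.argsort(w)[::-1]
kept = U[:, order[:m]]            # columns span the kept subspace

# Any block operator A would now be truncated as kept.T @ A @ kept.
discarded_weight = 1.0 - w[order[:m]].sum()
print(f"discarded weight: {discarded_weight:.2e}")
```

The discarded weight - the sum of the density-matrix eigenvalues you throw away - is the usual measure of how much damage the truncation does.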

Anyway - the book... much of what I wrote above has come to me through being reminded of the method by reading the preface article "How it all began: A Personal Account" by Steven White.  (NB this is hidden in the "Front Matter" chapter if you try to access the book online).  As an autobiographical reminiscence, this was very readable and helped me contextualise things.  I then felt emboldened to go on to the first "real" chapter: "Wilson's Numerical Renormalization Group" by Theo Costi.  Unfortunately I found this impossible to understand, as it supposed a high level of knowledge of the kind of condensed matter systems and models that the method was originally applied to.  I gave up and tried the second chapter: "The Density Matrix Renormalization Group" (Noack and White), which was much more textbook-like, a kind of tutorial for the method, with a first example being a particle in a discretised box (which the condensed matter physicists call "a single particle on a tight-binding chain"), and I was able to follow the arguments here well.  When it got to the details of exactly how the density matrix is used and the desired states projected out, I did not work through everything in detail, as I would have liked to in order to sketch out my own implementation on a computer, but if I had given it the kind of time I hoped I would give to each book in this project, I think I'd have got there.

Okay.  Not long left to do November's book.  😭




Thursday, 7 November 2024

... and major new discoveries in this area ceased to be made in Cambridge

In Charles Clement's biographical article on Tony Lane, the demise of nuclear and high-energy physics in Cambridge is summarized neatly, in matter-of-fact fashion, as

"In the late 1950s, theorists in high-energy fundamental-particle physics in Cambridge moved out of the Cavendish Laboratory away from experiment into the Department of Applied Mathematics and Theoretical Physics (DAMTP), and major new discoveries in this area ceased to be made in Cambridge."

 - a salutary warning to us all!

 

Wednesday, 6 November 2024

My name will live on forever

Some time ago, I blogged about the over-use of girls' names for physics things.  This was prompted by a Tweet stating "we are not decoration!"

Today I learnt about a new facility in South Africa called PAUL.  This is my name, and it's a male name, so I guess it slowly helps redress the balance. 

It's an underground facility for the kind of experiment where you want a low background of cosmic radiation.  Below is a CAD image of part of the setup, from the website above.



Thursday, 10 October 2024

Solid State Physics: Ashcroft and Mermin

 So... at the beginning of last month I posted about a plan to take one of the books I have on my physics bookshelf each month and do something worthwhile with it - learn something that I didn't know, work through some problem and see what enlightenment I get, even kick-start a research project, and then report back here with what I've learnt.

The first book, as prompted by a post on X (a website I have left in favour of BlueSky), was Ashcroft and Mermin's weighty textbook on Solid State Physics.  I should preface this whole post by saying that I'm a bit disappointed (with myself) for not carefully managing my time to get more out of the book, but, you know, life.  It's probably been a busier September than in most years.  It's the one year that all 4 of my kids are in school, with the youngest starting Reception Year and the oldest in Year 13, and it's freshers' flu season, and all sorts of other reasons ... I got "promoted" to be the representative of nuclear physics on STFC's science board at very short notice, and that knocked out a couple of days.

Well, so much for the excuses.  What about the book?  

I picked up a copy of the book when I was a PhD student, bought from Oxford's Blackwell's bookshop.  There's still a sticker on the back telling me that I paid £25.95 for it, which even in the late 90s was not a bad price for a hefty 800-page hardback advanced-level textbook.  Despite the pretty poor rate of the PhD stipend in those days, it was the first time in my life I felt rich enough to splash out £26 on a textbook.  My PhD was not in Solid State Physics, but I recognised it as an interesting area, ripe for a would-be theoretical physicist to do a PhD in.  Probably it was foolish of me to go for nuclear physics over solid state, but I had largely found the teaching of solid state physics uninspiring as an undergraduate, had not been motivated to learn very much of it, and felt ill-prepared for further study.  Well, I bought the textbook to have as a reference, and perhaps I thought I'd even study it and learn from it, an indication that I was not as self-aware then as I am now.  Or at least I saw a practically endless life with copious spare time stretching in front of me in a way I don't now.

Picking up the book now, I started by reading from the beginning with the Drude theory of metals - a picture in which mobile electrons form a gas following the laws of kinetic theory.  The theory dates back to only just after the discovery of the electron, but before the structure of atoms was understood, and before the laws of quantum mechanics, so vital for atomic and solid state structure generally, were known.  As a model it does a reasonable job (order of magnitude or better) of giving free electron densities and resistivities of metals; it can describe the Hall effect, and thermal conductivity.  The Drude model was something I had studied once upon a time, but I would have been hard pressed to say anything about it now, before re-learning from Ashcroft and Mermin.
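As a quick test of my re-learning, here's a back-of-the-envelope Drude calculation in Python.  The copper inputs are textbook values quoted from memory, so treat the numbers as approximate:

```python
# Back-of-the-envelope Drude estimates for copper.
# n and rho are textbook values quoted from memory -- approximate!
m_e = 9.109e-31   # electron mass (kg)
e = 1.602e-19     # elementary charge (C)
n = 8.5e28        # conduction-electron density of copper (m^-3)
rho = 1.7e-8      # resistivity of copper at room temperature (ohm m)

# Relaxation time from the Drude result rho = m / (n e^2 tau).
tau = m_e / (n * e**2 * rho)

# Hall coefficient R_H = -1/(n e), independent of tau in the Drude model.
R_H = -1.0 / (n * e)

print(f"relaxation time tau ~ {tau:.2e} s")      # about 2.5e-14 s
print(f"Hall coefficient R_H ~ {R_H:.2e} m^3/C")
```

The relaxation time comes out at a few times 10⁻¹⁴ s, which is the order of magnitude I remember from the book.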

I carried on skimming through the following chapters to get an idea of the broad brush of the development of ideas, but then decided that I wanted to learn at least something that might be a bit more useful to me, so I jumped way ahead to near the end of the book, to the chapter on Electron Interactions and Magnetic Structure.  It contains introductions to the kinds of spin Hamiltonians familiar as standard models to me as a quantum many-body physicist: the Heisenberg model and the Hubbard model.  I realise I had a very facile view of the development of these models, supposing that they started from the assumption of a lattice of atoms whose magnetic moments were fixed in place, with a very simple interaction between the atoms based on the relative orientation of neighbouring spins.  In reality, there is much more to it, and this is brought out nicely in the book.  For a start, though they are models of magnetism, the authors emphasise the electrostatic origin of the magnetic effects.  Mainly because of the required antisymmetry of the overall wave function, the spin orientation of atoms in a lattice can be determined, with the spin part having to match (or "anti-match", I suppose) the spatial part of the wave function, which itself is determined mainly by electrostatic effects.  Actual magnetic interactions between atoms are a smaller effect when it comes to how the atoms line up to give macroscopic magnetism.  Interesting!
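Since the Heisenberg model may end up as a student project, here is a minimal exact-diagonalization sketch in Python for a short spin-1/2 chain, H = J Σ S_i · S_(i+1).  It's just the standard Kronecker-product construction, nothing specific to the book's treatment:

```python
# Minimal exact diagonalization of a spin-1/2 Heisenberg chain,
# H = J * sum_i S_i . S_{i+1}, with open boundary conditions (hbar = 1).
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    mats = [id2] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_chain(n, J=1.0):
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for s in (sx, sy, sz):
            H += J * site_op(s, i, n) @ site_op(s, i + 1, n)
    return H

for n in (2, 4):
    E0 = np.linalg.eigvalsh(heisenberg_chain(n)).min()
    print(f"{n}-site chain: E0/J = {E0:.6f}")
# The 2-site value should come out as -0.75, the familiar singlet energy.
```

The 2-site result is a handy check that the construction is right; extending to longer chains (until memory runs out) would be exactly the sort of thing a final-year project could explore.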

Of course, I would have liked to have gone further, and worked through some examples to do actual calculations, and maybe worked through a problem or two at the end of the chapters.   I am already a week late writing this up here, though, and a week late starting the October book, so alas I will leave A&M behind for now.  On the other hand, I have to submit some ideas for BSc final year projects for Physics students at Surrey, so maybe I'll set one on the Heisenberg model, and vicariously live my continuing interest in this stuff through my student. 

To my regret, this exercise resulted in a nasty splodge on the fore-edge of the book when I left it in my bag with a too-ripe banana.  Still, perhaps that's better than having the book look pristine through being barely touched since its purchase nearly 30 years ago.


Monday, 23 September 2024

Commenting on research

I recently submitted my first comment on another research paper.  The paper I commented on is "Full quantum eigensolvers based on variance" by Li et al., Phys. Scr. 99, 095207 (2024).  It caught my eye because I am interested in variance-minimization techniques on quantum computers, and have used them myself to solve some problems.

Li et al. used as one example the case of the deuteron nucleus as given in a highly-cited paper by Dumitrescu et al., but unfortunately made several mistakes when applying it, and I wanted to make a comment to help correct the record.

In brief, though the Dumitrescu work deals with a deuteron nucleus and calculates the binding of a proton and a neutron, Li et al. describe it as a molecular system, giving results in units incorrect by orders of magnitude.  The Dumitrescu Hamiltonian is a bit unusual compared with most many-body quantum formalisms in that it is a one-body model for a two-body system: the deuteron is treated as a single particle, so that the number of occupied levels in an oscillator basis translates directly to the number of deuterons.  Mathematically, the model can give you back a "zero-deuteron" solution in which nothing is present.  The energy associated with that is zero, and the result is "trivial" in that there is no actual calculation to be done to get it.  Li et al. appeared not to understand any of this, and presented results as if they were all deuteron results, including the result with no deuterons present.  Thanks to some rounding, they also had the no-particle solution at a non-zero energy.  They showed how remarkably quickly their algorithm found this and other trivial solutions, not realising that these "solutions" could not test their algorithm at all.
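The zero-particle point is easy to demonstrate numerically.  Here's a small Python illustration using a made-up one-body Hamiltonian (emphatically not the actual Dumitrescu matrix elements): the vacuum is an exact eigenstate with zero energy and, crucially for a variance-based solver, zero variance, so a variance minimiser can "find" it with no work at all:

```python
# The "zero-deuteron" point in miniature: for any particle-number-
# conserving Hamiltonian H = sum_ij h_ij a_i^dag a_j, the empty state
# is an exact eigenstate with zero energy AND zero energy variance.
# The matrix h below is made up for illustration; it is NOT the
# Dumitrescu et al. Hamiltonian.
import numpy as np

n_levels = 3
lower = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilator on one level
id2 = np.eye(2)

def ann(i):
    """Annihilation operator for level i in the 2^n Fock space.
    (No fermionic sign strings; they don't affect the vacuum argument.)"""
    mats = [id2] * n_levels
    mats[i] = lower
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

a = [ann(i) for i in range(n_levels)]

h = np.array([[-2.0, 1.0, 0.0],    # made-up Hermitian one-body matrix
              [ 1.0, 3.0, 0.5],
              [ 0.0, 0.5, 6.0]])
H = sum(h[i, j] * a[i].T @ a[j]
        for i in range(n_levels) for j in range(n_levels))

vac = np.zeros(2**n_levels)
vac[0] = 1.0                        # |000>: no particles at all

E = vac @ H @ vac
variance = vac @ H @ H @ vac - E**2
print(f"vacuum energy   = {E:.2f}")        # 0.00
print(f"vacuum variance = {variance:.2f}") # 0.00 -- "found" with no work
```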

This was one example among a few in their paper, but it seemed like enough for a comment.  Not to say that the whole paper was wrong, but to flag up some things that should not be propagating through the open literature.  

So, I wrote a brief comment paper and sent it to Physica Scripta.  I got a rejection from them, and as per IoP journal policy, there is no right to query a rejection.  If there were, I would have argued back that the referee actually agreed that the misidentified units and nuclear nature were problems in the paper (though possibly a "clerical error", and ones that didn't matter because someone else had made the same mistake before).  The referee appeared not to understand, or at least did not address in the report, probably the most important point: that the nature of the model and its trivial solutions was not understood by Li et al.  The referee concluded that, in any case, mistakes in one example did not justify the publication of a comment.  Before submitting, I had checked the Physica Scripta guidance on comments to confirm that a single problem, rather than a slew of them, sufficed to justify one.

Well ... I could try to find the email address of the editorial board, since the notification email I got with the judgement is from a junk address at the editorial management system, and present my case, but I feel like writing the comment paper has been shouting into the void enough, and so instead I'm writing this blog post where at least it is on the record.

edit:  Let me add, too, that I couldn't put the comment paper on the arXiv because they do not allow such things (comments on papers that are not themselves in the arXiv).  It would be nice to find somewhere to post the paper.

Sunday, 1 September 2024

Working through books


When learning things, I still consider books an important go-to resource.  When I was at school, I would use my pocket money for some typical child things – copies of the Beano, 7" singles, sweets – but I also, especially when a bit older, though still very much a child, bought books specifically to teach me things.  In my early teens, that was mainly computing, mathematics, astronomy and electronics, though sometimes I managed to pick up books on what would become my main academic interest of physics.

As a physics undergraduate, I continued to get my own copies of books when I could afford to and/or found interesting things in second-hand shops.  As time has gone on, I have carried on getting hold of books through second-hand shops or websites, buying them new, being given them by retiring colleagues, and from closing-down sections of university libraries which gave staff first refusal before dumping them.

As of today, there are somewhere between 200 and 300 books that I can reach for when I am sitting at my desk working, and I realise that some of them I may never feel the need to pick up and open ever again.  That's not because there's nothing in the books of use or interest to me, but rather that unless I make the effort, one day of not picking up a given book, followed by another, can lead to all my remaining days passing without my making use of it.  Of course, it doesn't really matter, and I needn't let the rest of my life be dictated by what books I happen to have to hand right now, but ... I have a plan.

If I choose one book per month then I should be able to cover the collection, more or less, by the time I retire, depending on when I do that, and on what "retirement" means for projects like working through these books.  By "working through" I mean that I will devote as much time as I can manage in each month to reading the book, and seeing what value I can get out of it.  That might be anywhere from reminding myself of something I used to know as an undergraduate, or maybe half-learned but never quite understood, to triggering a research project leading to an original result and a publication, to an interesting anecdote, to running a final-year BSc project on the topic.

I slightly fear for my ability to strike the right balance of finding enough time to get something useful from each book without it taking up time needed for other things, but I'm hoping that serendipitous finds and general intellectual enrichment will make it a net benefit.  If not, why do I even have all these books?

It's 1st September today, so I should pick a book for September 2024.  This is going to be a hard part of the project.  I will do it quickly, and edit this post with the result.  If you can see anything in the enlarged version of the attached image that you want me to pick, please comment and I may well start with that.

edit: So I've had a request on X to start with Ashcroft and Mermin's Solid State Physics, and so it begins.


Thursday, 22 August 2024

Solving Diophantine Equations with Grover's Algorithm

Today on the arXiv a paper appeared, written by Lara Tatli and me.  The paper is on solving Diophantine equations on a quantum computer using Grover's algorithm.  

The project started last summer when Lara (an undergraduate physics student at Durham) got in touch to ask if she could do a summer project with me.  I had an idea to use a quantum search algorithm to find roots of equations more efficiently than a classical algorithm might, and I suggested we could start looking at that.  My initial idea was very vague, but I thought perhaps we could encode floating-point numbers in an interesting way and search for values of x which solve f(x) = 0 for some hopefully interesting functions.  This "root finding" is a common basic tool of numerical analysis, and many problems can be formulated this way.

When looking at how to encode the numbers on a quantum computer, it became clear that the simplest case - where the numbers are just straight binary representations of integers - would be the easiest place to start, but still interesting enough.  Equations where the numbers can only take on integer values are called Diophantine equations after Diophantus of Alexandria, who studied them in the 3rd century.  The standard definition of Diophantine equations has them in at least two variables, and a simple example is finding all the Pythagorean triples - the set of right-angled triangles with integer sides.  The basic example there is {3, 4, 5}, since these integers satisfy Pythagoras' theorem: 3² + 4² = 5².  There are an infinite number of solutions of the equation x² + y² = z² where x, y, z are all positive integers.  Fermat's Last Theorem famously states that there are no solutions of xⁿ + yⁿ = zⁿ where x, y, and z are positive integers and n is an integer greater than 2.

Anyway - our ambition is, at this stage, at the level of a simpler Diophantine equation, and we just looked at x + y = 5, arbitrarily chosen.  Here, there are lots of solutions if you allow x and y to be negative integers, while if you restrict to x, y ≥ 0 then the solutions are (x, y) = (0,5), (1,4), (2,3), (3,2), (4,1), (5,0).  This is what we were hoping to find.

Grover's algorithm works by constructing a quantum phase oracle that takes a linear superposition of encodings of all possible (x, y) pairs within some range of values of x and y (we encoded each with 3 bits, so from 0 to 7) and changes the phase of the quantum amplitude for each component of the linear superposition that represents a valid solution.  Then a neat trick uses quantum interference to amplify those states which have been marked out, while reducing the amplitude of the others.  In general several cycles of this marking and amplification can be needed to pick out the answers, but - crucially - fewer cycles than a non-quantum search.  It's a very standard quantum algorithm, first worked out by Grover in the 90s.  Applying it to simple Diophantine equations as we have done doesn't seem to be something that people have done before, though I dare say now our paper is up on the arXiv, someone will come along and point out that it has been implemented already.  It's a pretty simple thing, but has the potential to be worked up into a more powerful solver of more interesting equations.
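To show how few moving parts there are, here is a plain-numpy statevector sketch of the search for x + y = 5.  This is just the idea in simulation form, not the circuit-level implementation from our paper:

```python
# Statevector sketch of Grover search for solutions of x + y = 5,
# with x and y each encoded in 3 bits (0..7), concatenated to a
# 6-bit index. Plain numpy, not a quantum-circuit implementation.
import numpy as np

n_bits = 6
N = 2**n_bits

xs, ys = np.divmod(np.arange(N), 8)         # index -> (x, y)
oracle = np.where(xs + ys == 5, -1.0, 1.0)  # phase flip on solutions

psi = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition

for _ in range(2):                          # two Grover iterations
    psi = oracle * psi                      # oracle: mark the solutions
    psi = 2.0 * psi.mean() - psi            # diffusion: invert about the mean

probs = psi**2
for s in np.argsort(probs)[::-1][:6]:       # six most probable states
    s = int(s)
    x, y = divmod(s, 8)
    print(f"|{s:06b}>  x={x}, y={y}, prob={probs[s]:.3f}")
```

With 6 solutions among 64 basis states, two iterations are enough to push nearly all of the probability onto the solution states.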

Here's the final plot from the paper, showing the quantum state of the quantum computer after two iterations of Grover's algorithm.  Four quantum states are picked out, which we have labelled with the encoding of the x and y values.  If you split the 6-bit binary strings into two 3-bit strings for the values of x and y, you'll see they add up to 5.