17: arXiv paper; Jan 15

Here’s a neat scanning tunnelling microscopy (STM) paper from the 15th by Yin et al. titled “Negative flatband magnetism in a spin-orbit coupled kagome magnet”.

There’s a lot to unpack in the title (again) but most of it can be ignored (again). That’s the beauty of buzzwords.

It’s a long abstract, but the key points are: 1) the material is a ‘kagome’ magnet, Co3Sn2S2; 2) they measure a (flat-band) peak in the electron density of states; 3) this peak doesn’t shift in energy the way you’d expect when a magnetic field is applied.

The last point is probably the coolest. Imagine a bar magnet with a north and south pole. If you put it near another magnet that’s generating a magnetic field, it’ll either move towards or away from it. Now, put another bar magnet next to the first, except flipped. If you turn on the magnetic field again, one of those magnets is going to move toward the source, and one is going to move away. This is because one orientation is more energetically favourable than the other (try to pull a magnet off the fridge – it takes energy; it’s more energetically favourable for the magnet to stay stuck to the fridge).
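
To put a number on that intuition – and on what the ‘expected’ behaviour looks like – here’s a minimal sketch of the usual energy of a magnetic moment in a field, E = −m·B. Flipping either the moment or the field flips the sign of the energy shift, so the two bar magnets should always move in opposite directions. (The simple dot-product model and the 8 T field here are just for illustration; nothing is taken from the paper.)

```python
import numpy as np

MU_B = 5.788e-5  # Bohr magneton in eV/T

def zeeman_energy(moment_direction, B_tesla):
    """Energy of a magnetic moment (one Bohr magneton) in a field: E = -m.B."""
    m = MU_B * np.asarray(moment_direction, dtype=float)  # moment along +z or -z
    B = np.asarray(B_tesla, dtype=float)                  # field vector in tesla
    return -np.dot(m, B)

for B in ([0, 0, +8], [0, 0, -8]):          # field up, then field down (8 T, arbitrary)
    up   = zeeman_energy([0, 0, +1], B)     # moment pointing 'north' (up)
    down = zeeman_energy([0, 0, -1], B)     # moment flipped
    print(f"B = {B[2]:+d} T:  E(up) = {up*1e3:+.2f} meV,  E(down) = {down*1e3:+.2f} meV")

# Conventional expectation: for either field direction, one moment goes down in
# energy and the other goes up -- they never shift the same way. That's exactly
# what makes the same-direction shift reported in the paper so unusual.
```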


This is similar to what’s shown in Fig. 4g of the paper – except notice that whether the magnetic field (B) points up or down, the spins (like the bar magnets in the example) both move the same way. Which is definitely not what you would expect, and definitely doesn’t happen in most materials.


And on top of that, notice that while you would expect the magnet pointing north to move up when the field points up, it actually moves down! (This isn’t a perfect match with Fig. 4g, where going ‘down’ in energy is equivalent to the bar magnet moving physically up. Lower energy = more favourable, like a fridge magnet staying stuck to the fridge.) Instead of going where it should (up, towards lower energy), it goes the opposite way (down), in a direction that increases its energy. To understand why, we have to look a little deeper into the material itself and the way its atoms are arranged.

This material, Co3Sn2S2, is made up of two sorts of layers. The Co3Sn ‘kagome’ layer – where the Co atoms form a kagome pattern (like a Star of David; see below for a long aside) with a Sn atom in the middle of each hexagon – is sandwiched between layers of S atoms. The other layer is made up of only Sn atoms. What they did here is ‘cleave’ the sample: they broke off the top surface of a piece of crystal by gluing a small pin to the top of it and then hitting the pin. Since there are two types of surfaces (ones terminated by S atoms and ones terminated by Sn atoms), either surface could be exposed.


Unlike the previous paper discussed, which used a technique called angle-resolved photoemission spectroscopy (ARPES), this paper uses a technique called scanning tunnelling microscopy (STM). The two main differences are that ARPES looks directly at the band structure but averages over a large area, while STM indirectly measures the band structure but at a single atomic point. This means that unlike ARPES, STM can tell exactly which surface is being measured, which is important in this case because the two surfaces behave differently.

What they found is that there’s a peak in the density of states (a description of the electronic structure of the material) that exists only on the S surface, and not on the Sn surface. (I invite you to read the paper for further details.) Peaks are exciting! They can mean lots of things, but more importantly, you can manipulate the sample by changing the temperature, applying a magnetic field, applying an electric field, moving the sample, rotating the sample, etc. and see what happens to a peak. (It’s hard to see what happens to a featureless signal, because a featureless signal will probably remain featureless.)
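
For a flavour of what ‘tracking a peak’ means in practice, here’s a hypothetical sketch (my own illustration, not the authors’ analysis code): model the dI/dV spectrum as a Lorentzian peak on a smooth background, fit it, and read off the peak energy. Repeat at each magnetic field and you get peak-position-versus-field curves like the ones plotted in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(E, amplitude, center, width, background):
    """A single Lorentzian peak on a flat background -- a common model for a DOS peak in dI/dV."""
    return amplitude * width**2 / ((E - center)**2 + width**2) + background

# --- fake data standing in for a measured dI/dV spectrum (illustration only) ---
rng = np.random.default_rng(0)
energy = np.linspace(-50, 50, 201)                         # bias energy in meV
true = lorentzian(energy, amplitude=1.0, center=-8.0, width=6.0, background=0.3)
spectrum = true + 0.03 * rng.standard_normal(energy.size)  # add measurement noise

# --- fit and extract the peak position ---
p0 = [1.0, 0.0, 5.0, 0.2]                                  # rough initial guess
popt, pcov = curve_fit(lorentzian, energy, spectrum, p0=p0)
center_err = np.sqrt(np.diag(pcov))[1]
print(f"fitted peak energy: {popt[1]:.2f} +/- {center_err:.2f} meV")
# Doing this for spectra taken at different magnetic fields gives peak energy vs. field.
```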


Excitingly enough, something does happen to this S-surface-only peak. Imagine the peak being filled by a bunch of electrons with spins all pointing one way (again, you can think of them as tiny bar magnets, but don’t tell your physics teacher I said that.)


Now, when they applied a magnetic field up, they found that all these electrons increased in energy–and then when the magnetic field was applied down, they increased in energy by the same amount.


This is because there is a “flat band” that contributes to that peak, and it carries a different sort of magnetism (“orbital magnetism”) that is opposite in sign to the ferromagnetic magnetism (the magnetism we usually think of when we think of magnets). Systems with flat bands are rare, and kagome lattices are one of the very few structures that can host them. So here, there’s evidence that this material does have a flat band, and more importantly, that it sits near the Fermi level – the energy range experiments can actually measure.
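
To see where a kagome flat band comes from, here’s the textbook toy model: nearest-neighbour hopping on the kagome lattice (three sites per unit cell). This is a generic sketch, not a calculation for Co3Sn2S2 specifically – but one of the three bands comes out completely dispersionless, i.e. the same energy at every momentum, which is what shows up as a sharp peak in the density of states.

```python
import numpy as np

t = 1.0                                   # nearest-neighbour hopping (arbitrary units)
a1 = np.array([1.0, 0.0])                 # kagome lattice vectors
a2 = np.array([0.5, np.sqrt(3) / 2])

def kagome_bands(k):
    """Eigenvalues of the 3x3 nearest-neighbour kagome Bloch Hamiltonian at momentum k."""
    c1 = np.cos(np.dot(k, a1) / 2)        # A-B bonds
    c2 = np.cos(np.dot(k, a2) / 2)        # A-C bonds
    c3 = np.cos(np.dot(k, a2 - a1) / 2)   # B-C bonds
    H = -2 * t * np.array([[0, c1, c2],
                           [c1, 0, c3],
                           [c2, c3, 0]])
    return np.linalg.eigvalsh(H)          # sorted low to high

# Sample random momenta (the model is periodic, so any window works) and look at
# how much each band disperses.
rng = np.random.default_rng(1)
ks = rng.uniform(-2 * np.pi, 2 * np.pi, size=(2000, 2))
bands = np.array([kagome_bands(k) for k in ks])   # shape (2000, 3)

for n in range(3):
    print(f"band {n}: min = {bands[:, n].min():+.3f} t, max = {bands[:, n].max():+.3f} t")
# The top band sits at exactly +2t for every momentum -- a flat band.
# (In a real material the band isn't perfectly flat -- further-neighbour hopping and
#  spin-orbit coupling give it some dispersion -- but a nearly flat band still
#  produces a sharp peak in the density of states.)
```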

The orbital magnetism and flat band tie into some topological physics (this material’s attracted a lot of attention because it’s theoretically a Weyl semimetal), which also makes it extremely interesting from a theoretical point of view.

And again, here’s the paper for those who want to look at the figures in the original context!

And, some more elaboration on kagome lattices:


13: it’s been a while

It’s been a while since my last post – and even longer since my last real post. Part of it is that I was sick for a while, but the other part is that I have 3 draft posts, and 0 complete.

The motivation in starting this blog was (other than to show off my cats) to present and communicate an area of science that is usually ignored. My friends have frequently heard my complaints that space, black holes, even plasma physics – these are exciting! Relatable! People like learning about planets and galaxies! An outsized portion of science outreach is done by astrophysicists. Which, in my opinion, is great. Astrophysics is what set me on the path to science curiosity, with these mysterious objects called “black holes” that we can’t see because they’re, well, black. Everything I know now about black holes is from those formative years, Wikipedia, and Interstellar.

However, this doesn’t change the fact that there are so many fascinating areas of science that are glossed over and ignored. For all my gripes, it makes sense. It’s difficult to be interested in something you can’t relate to. Someone I know studies cells in mosquitoes – everyone knows what a mosquito is, and even if you think ‘yuck’, you might still want to know more. But if I’m presented with a paper written in Farsi on livestock breeding – what? Why? I can’t even understand Farsi.

And here we have one of the barriers to communicating about these topics: they are so far removed from public perception that they might as well be an alien language altogether.

I have tried to start science blogs several times in the past: documenting science itself, and documenting my research and interests as an undergraduate. Most recently, the spark came not from an internal desire, but from a press release on a (scientific) paper that was published. It turned out I had no idea how to communicate the contents of that paper to a general audience in any conceivable way. (The final press release couldn’t either, but that’s the scientist in me speaking.) So I took a science writing class. Learned a bit about “the other side”, the challenges in straddling journalism and science, and the reasons why scientists and science communicators tend to butt heads.

The class is over, but I didn’t want to let it end like that. There’s another paper coming up that might get a press release too. So! I’ll keep trying, and hopefully one of those three drafts will be finished and posted soon, with some semblance of public comprehension!

9: Funding Fundamental Science

A recent piece by Nobel laureate Donna Strickland titled “Scientists Need Time To Make Discoveries” is incredibly relevant in today’s landscape of grant and proposal scrambling. People don’t want to pay for things that have no obvious use.

Therefore, here is yet another piece I did for my science writing class (it previously included a paragraph about which my teacher commented, “seems like a bunch of loosely related things you’ve been wanting to vent about and figure this is your chance,” but which would make for a great separate article… so maybe that’ll happen too!)

Buzzwords, quantum computers, and hoverboards

Here’s a secret: quantum computers are not in our immediate future. But pick up a physics paper written by sensible academics, and you’d be forgiven for thinking that a future with hyperfast, ultra-secure computing is just beyond the horizon. Or maybe one with ‘spintronics’, devices that go beyond your conventional electronics in some vague, nebulous way. Endless electric currents, memory devices that will never fall prey to stray magnetic fields, unbreakable cryptography, thermoelectric power generators, and the always popular but rarely defined ‘future applications.’

This problem is particularly pervasive in the field known as condensed matter physics or solid-state physics. Grab a person off the street, and I can almost guarantee you they won’t have a clue what those words mean. But space? Stars? The universe? Who doesn’t want to go to the moon? Of course, we should be spending billions of dollars on a mission to Mars. But spend a couple million dollars on something no one’s ever heard of? Why would anyone do that? It’s no wonder that these physicists wrap up their work in exaggerated buzzwords that border on magic, even if they’re rooted in reality. (After all, ‘quantum physics’ is the basic building block of fundamental physics and the only way we can describe nature as we go down to the atomic level.) Maybe if we get a couple of hoverboards, Back to the Future style, we’ll talk about it. Oh wait, you mean this ‘condensed matter physics’ thing can give us hoverboards? Now we’re talking.

Hoverboards are cool, and the prototype Lexus developed in 2015 is beyond cool, but that’s old science. (It works by exploiting how magnets and superconductors repel each other, but even the ‘high temperature superconductors’ it uses need to be cooled to about -320 F.) We got there in the 90s. Physics is out here studying phenomena no one’s ever heard of and discovering phenomena even physicists never guessed existed. But what physics does run on is money. Lots of money. Funding agencies are well beyond hoverboards, but throw in words like ‘topological quantum computation’ and they’re listening. The idea of ‘braiding’ these ‘non-Abelian’ ‘Majorana fermions’ for ‘fault-tolerant’ computation is too good to resist, even if no one understands what these words mean individually, never mind when they’re combined. But buzzwords are useful for getting across the idea that this is important. Neon flashing lights important. Even without these nebulous ‘future applications’, these buzzwords signal that this is the cutting edge of research, that this is what’s new, this is what you should be paying attention to.

But none of this means that the science isn’t valid. The science is fascinating. In the words of every physicist ever: it’s cool. The very frontiers of human knowledge and understanding are being pushed beyond imagination, and incremental discoveries are being made every day, one painstaking cup of coffee at a time. The investigation of fundamental physics has resulted in ‘future applications’ like the well-known transistor, and transistors have grown so small that quantum effects do need to be taken into account. The discovery of high-temperature superconductors like the ones used in the working hoverboard has been instrumental in developing powerful magnets, and superconducting magnets are now so common in medical uses that no one stops to think about the science and fundamental physics that went into understanding superconductivity.

For every published paper that leads to advancing the well-being of society, there are thousands that represent the tiniest steps in advancing an obscure, insulated field. Ultimately, this is what science–and especially physics–is. An incredibly slow learning process of what makes the world tick. Every now and then physicists might need to fall back on quantum computing or hoverboards to explain why they’re doing what they’re doing, but at the end of the day, physics isn’t about ‘future applications’. It’s about breaking through frontiers, one quantum leap at a time.

Oh, and it’s just really, really cool.

4: arXiv paper; Jan 5

This is a paper that appeared on the arXiv on Jan 5 by Feng et al. titled “Discovery of Weyl nodal lines in a single-layer ferromagnet”.

…and right away we encounter a problem of “what do those words even mean?” (Don’t worry about it. Let’s come back to that.)

Here is the abstract in full:


Two-dimensional (2D) materials have attracted great attention and spurred rapid development in both fundamental research and device applications. The search for exotic physical properties, such as magnetic and topological order, in 2D materials could enable the realization of novel quantum devices and is therefore at the forefront of materials science. Here, we report the discovery of two-fold degenerate Weyl nodal lines in a 2D ferromagnetic material, a single-layer GdAg2 compound, based on combined angle-resolved photoemission spectroscopy measurements and theoretical calculations. These Weyl nodal lines are symmetry protected and thus robust against external perturbations. The coexistence of magnetic and topological order in a 2D material is likely to inform ongoing efforts to devise and realize novel nanospintronic devices.

In the paper, the authors claim that when they grow GdAg2 a single atomic layer thick – hence 2D (think graphene) – they find ‘Weyl nodal lines’. (This is neat because these ‘Weyl nodal lines’ are ‘symmetry protected’ – so unless you do something violent to the system, they won’t go away. A magnetic field is one of those violent things – it breaks ‘time-reversal symmetry’.) This material has mirror symmetry, and this is the symmetry that ‘protects’ the nodal lines. It’s possible that technology could make use of these nodal lines in electronics (in this particular case, spintronics).

In 2D GdAg2, it turns out that when you apply a magnetic field in a certain direction, it will gap out the nodal lines. Here’s a schematic:

A Weyl node can be thought of as a single point where two cones touch. If you ‘push’ these two cones into each other, they intersect along a closed loop instead of at a single point. Hence, a nodal line. If it’s symmetry protected, you can’t open a gap between the two bands. If you break the symmetry, a gap can open.
[Schematic: two cones touching at a Weyl node, and two cones pushed together so they cross along a nodal line (source)]
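
Here’s a hypothetical two-band toy model (my own illustration, not the model in the paper) that captures the cartoon above: two bands that are degenerate along a circle in the 2D Brillouin zone (a nodal line), plus a symmetry-breaking term m that gaps it out. In GdAg2 the protecting symmetry is a mirror and the knob that breaks it is the magnetization direction; m here is just a generic stand-in for that.

```python
import numpy as np

k0 = 1.0   # radius of the nodal line in momentum space (arbitrary units)

def band_gap(kx, ky, m):
    """Gap between the two bands of H(k) = (|k|^2 - k0^2) * sigma_z + m * sigma_x."""
    d_z = kx**2 + ky**2 - k0**2          # vanishes on the circle |k| = k0
    d_x = m                              # symmetry-breaking 'mass' term
    return 2 * np.sqrt(d_z**2 + d_x**2)  # E+ - E- = 2|d|

# Scan the gap along the would-be nodal line (|k| = k0) for different mass terms.
theta = np.linspace(0, 2 * np.pi, 200)
kx, ky = k0 * np.cos(theta), k0 * np.sin(theta)

for m in (0.0, 0.2):
    gaps = band_gap(kx, ky, m)
    print(f"m = {m}: minimum gap on the loop = {gaps.min():.3f}, maximum = {gaps.max():.3f}")
# m = 0   -> the two bands touch everywhere on the loop: a nodal line.
# m = 0.2 -> the whole loop is gapped out by 2|m|, which is the role the
#            symmetry-breaking magnetization direction plays in the paper.
```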

In the paper, they calculated that if you apply a magnetic field in the plane (M//x), it gaps out all of these nodal lines (everything is red). BUT if it’s applied perpendicular to the 2D plane (M//z), some of these nodal lines aren’t gapped out (some of those circles are still black).
[Figure: calculated nodal lines for M//x (all gapped, shown in red) and M//z (some still gapless, shown in black)]

The technological idea is that this gap can be turned on and off, and this can be used in electronics (similar to a 1 or 0 computer bit).

Their main scientific claim is that the angle-resolved photoemission spectroscopy (ARPES) measurements show that the band structure matches up with their theoretical calculations (without a magnetic field), and therefore this material has nodal lines. There are actually 4 nodal lines in this material – but because other symmetries are broken, 2 of them are destroyed (#1 and #2) and 2 survive (#3 and #4). When they look at where nodal line #3 should be, it looks gapless from multiple directions, and I quote: “These results indicate that the band crossing points near the Fermi level form a closed line surrounding the Γ point, which agrees well with the shape of NL3.”

(The lower panels are the 2nd derivative of the upper panels, which shows the bands more clearly).

Overall this is really neat, and the crossing looks pretty gapless to me. Their measurement technique doesn’t let them apply a magnetic field so they can’t directly measure if the gap opens and closes, but it would be cool if someone could show that it did. Ferromagnetism in 2D materials is rare in and of itself, making the material interesting in its own right.

(Authors: if you ever see this…my apologies for the extreme simplification.)

2: A Persistent Helium Shortage

To get things going: this was originally written for a scientific writing class, and is a shortened version of a “full length” feature article. It briefly covers the current US (and in part, global) shortage of helium – and why it has such a large impact on science.

No helium, no party balloons. No squeaky voices. No MRI machines, and certainly no Higgs bosons. The world seems to be constantly threatened by helium shortages, and party planners have felt the pinch every time, with helium no longer on the shelves.

Helium, however, is used for far more than impersonating chipmunks and getting balloons caught in the trees. Approximately a third of the helium consumed in the US is used as a cryogen – a coolant for reaching extremely low temperatures – with other major uses including welding and electronics fabrication, as well as providing lighter-than-air lift. While some of these applications may be able to use substitutes for helium, it is critical for MRIs and certain scientific experiments, as it is the only way to reach the extremely low temperatures necessary to run the powerful magnets these tools depend on.

These magnets work by running a current through coils of superconducting wire made of a niobium-titanium alloy (Nb-Ti) that needs to be below -263 C (-442 F), or 10 Kelvin, to operate. (Since working with such large negative temperatures is impractical, scientists prefer to use a temperature scale known as ‘Kelvin’, where 0 K is absolute zero, the coldest temperature allowed in the universe.) Above this ‘critical temperature’, a superconductor acts exactly like a ‘normal’ metal instead of being in the superconducting state, where its resistance is zero and an electric current inside it will flow forever. Not only would it cost an enormous amount of energy to keep a current flowing through a non-superconducting wire, but non-superconducting wires are also unable to generate the extremely high magnetic fields MRIs need – about two hundred times as strong as the strongest fridge magnet. While liquid nitrogen is cold enough to make ice cream, its boiling point of 77 K (-196 C or -321 F!) is far too high for these magnets to work. On the other hand, in its liquid form, helium has a boiling point of about 4.2 K, cold enough to keep the Nb-Ti wires superconducting.
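
For anyone who, like me, can never keep the three temperature scales straight, here’s a quick sanity check of the numbers above (just unit conversion, nothing beyond what’s quoted in the article):

```python
def kelvin_to_celsius(T_k):
    return T_k - 273.15

def kelvin_to_fahrenheit(T_k):
    return (T_k - 273.15) * 9 / 5 + 32

checkpoints = {
    "Nb-Ti critical temperature (approx.)": 10.0,  # superconducting below roughly this
    "liquid nitrogen boiling point": 77.0,
    "liquid helium boiling point": 4.2,
    "LHC magnet operating temperature": 1.9,
}

for label, T_k in checkpoints.items():
    print(f"{label}: {T_k} K = {kelvin_to_celsius(T_k):.0f} C = {kelvin_to_fahrenheit(T_k):.0f} F")
# 10 K  -> about -263 C / -442 F
# 77 K  -> about -196 C / -321 F
# 4.2 K -> about -269 C / -452 F
```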

Superconducting magnets are also indispensable in scientific research. The National High Magnetic Field Laboratory (MagLab) in Florida has magnets that reach tens of times the strength of the magnets used in MRIs. The Large Hadron Collider, where the so-called ‘God particle’ or Higgs boson was discovered, uses 120 tonnes of liquid helium to keep its magnets at their operating temperature of 1.9 K, colder than the 2.7 K of outer space. (By using special scientific fridges, scientists are able to reach temperatures even lower than liquid helium’s 4.2 K.) However, the cooling power of liquid helium is used for far more than operating superconducting magnets. The building blocks of quantum computers are typically studied well below 1 K, and SPIDER, a ‘telescope’ carried above the atmosphere by a balloon to search for the earliest epochs of the universe, keeps its cameras at 0.25 K. Without liquid helium, research grinds to a halt for many scientists – and medical use takes priority when the supply of liquid helium runs short.

But how exactly do we find ourselves mired in helium shortages almost every other year, when helium is the second most abundant element in the universe? Because helium is lighter than air, it floats off into space once it’s released. This means Earth is a one-way valve – helium goes out, but doesn’t come in. Out in the universe, most helium is created in stars, but it’s currently impossible to replicate this process on Earth. As it stands, the only viable source of helium is as a byproduct of natural gas extraction: helium produced by the slow radioactive decay of heavy elements in the Earth’s crust accumulates in the same underground reservoirs as natural gas, and is separated out when the gas is extracted. According to the 2018 US Geological Survey, the U.S. produced 63 million cubic meters of helium out of a global total of 160 million cubic meters. The Bureau of Land Management, a branch of the Department of the Interior, which manages a helium reservoir near Amarillo, Texas, released a further 28 million cubic meters. The next biggest helium producer is Qatar, with 45 million. In fact, 95% of the helium the US imports is from Qatar. However, the situation in the Middle East is far from stable. When embargoes were imposed on Qatar in the summer of 2017, there was no way to export the helium byproduct of its natural gas extraction; Qatar’s helium production plants went offline, taking over 25% of the global supply with them. While the worst of the crisis has ended and Qatar has slowly begun to produce helium again, its impact persists well over a year later. Some headway has been made toward both reducing helium usage (e.g. cryogen-free ‘dry’ fridges) and increasing production (Qatar has a third helium plant slated to come online in 2019, with a possible fourth plant in the works). Nonetheless, at present, the available helium supply is clearly insufficient and precarious.
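
Putting the production figures quoted above side by side (numbers as quoted in this piece, from the 2018 USGS survey; the percentages and rounding are mine) makes the scale of the Qatar disruption obvious:

```python
# Helium production, millions of cubic metres (figures as quoted above).
production = {
    "US (extraction)": 63,
    "US (BLM reserve release)": 28,
    "Qatar": 45,
}
global_total = 160

for source, volume in production.items():
    print(f"{source}: {volume} Mm^3  ({100 * volume / global_total:.0f}% of global supply)")
# Qatar alone is roughly 28% of the global total -- so its plants going offline in
# 2017 really did take over a quarter of the world's helium supply with them.
```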

Maybe the shadow of this current shortage is finally beginning to lift, but given both the historical record and the possible continued disruption of a major supply source, particularly amid global political tensions, it seems unlikely that this will be the last one. All researchers can hope for is for the situation not to worsen, and for a government that recognises the importance of scientific research.

1: Belated Beginnings

In a misguided attempt at self-accountability for the upcoming year (about 5 years late), and as an exercise in science communication, I’m intending to choose one or two arXiv papers a week from cond-mat (or other interesting papers) and write an intelligible summary of what they’re about. A brief intro, the conclusion, and probably an oversimplified analogy.

Possibly also a short update of Science accomplished during the week, in as vague and unexplanatory a form as possible. And pictures of my cats.

Lezzgetit.