
The Ambiguity of the 2nd law: Objections


The 2nd law of thermodynamics can be stated in many ways, and all of them are equivalent. The most common form states that the entropy of an isolated system never decreases with time, i.e. \frac{dS}{dt}\geq 0. This law is held in such high regard in physics that, as the saying goes, if you pen a theory that violates conservation of energy or charge, it may yet survive, but it has no hope if it violates the 2nd law of thermodynamics.
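The statistical tendency the law describes can be seen in a toy model, the Ehrenfest urn ("dog-flea") model, a standard textbook illustration rather than anything from the arguments below:

```python
import random

def ehrenfest_imbalance(n_fleas=100, n_steps=2000, seed=0):
    """Ehrenfest urn model: fleas hop at random between two dogs.

    Start with every flea on dog A (an ordered, low-entropy state) and
    track how far the split is from 50/50.  The imbalance shrinks: the
    system drifts toward the most probable, highest-entropy macrostate.
    """
    rng = random.Random(seed)
    on_a = n_fleas                     # all fleas start on dog A
    imbalance = []
    for _ in range(n_steps):
        # pick a flea uniformly at random; it hops to the other dog
        if rng.randrange(n_fleas) < on_a:
            on_a -= 1
        else:
            on_a += 1
        imbalance.append(abs(on_a - n_fleas // 2))
    return imbalance

trend = ehrenfest_imbalance()
early = sum(trend[:100]) / 100         # average imbalance at the start
late = sum(trend[-100:]) / 100         # average imbalance at the end
print(early > late)  # True: the ordered state relaxes toward 50/50
```

Note that nothing forbids the imbalance from growing again; it is just overwhelmingly improbable, which is exactly the statistical character of the 2nd law.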

The 2nd law, entropy never decreases
The circles with bigger radius indicate higher entropy. The direction of arrow is the direction of evolution of the system

In the 21st century this law seems simple and straightforward, but that was not the case in the 19th century. There have been many objections to it, and some remain. A few of them are particularly important, so let us go through them today.

Loschmidt’s reversibility objection.

The basic underlying principle of statistical physics is Newton's laws: the particles described by Maxwell-Boltzmann statistics obey them. Newton's laws have the property of time symmetry, i.e. for every possible motion forward in time there is also a possible motion backwards in time, with the direction of momentum reversed.

Loschmidt objected that, by this symmetry property of Newton's laws, for every arrow in phase space with \frac{dS}{dt}> 0 there is an arrow with \frac{dS}{dt}< 0. And his objection was indeed correct.
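Loschmidt's reversed motion is easy to check numerically: a time-reversible integrator run forward and then restarted with the momentum flipped retraces its path. A minimal sketch with a single particle under gravity (illustrative, not tied to any particular gas):

```python
def verlet(x, v, n_steps, dt=1e-3, g=-9.81):
    """Velocity-Verlet integration of a particle under a constant force.

    The scheme is time-reversible, just like Newton's laws: flipping the
    momentum and integrating again undoes the forward evolution.
    """
    for _ in range(n_steps):
        x += v * dt + 0.5 * g * dt * dt
        v += g * dt
    return x, v

x0, v0 = 1.0, 2.0
x1, v1 = verlet(x0, v0, 5000)      # forward in time...
x2, v2 = verlet(x1, -v1, 5000)     # ...momentum reversed: Loschmidt's arrow
print(abs(x2 - x0) < 1e-9)         # True: the system retraces its path
```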

Loschmidt’s objection: for every forward arrow there is a corresponding reverse arrow

The solution for this objection is as follows:
We need to break the symmetry and that can be done by introducing a boundary condition. The particular boundary condition we impose is called the past hypothesis.

The past hypothesis prevents the movement of system from higher to lower entropy

Past hypothesis: "The universe began in a low-entropy state." This cancels all the arrows that start in a high-entropy state and point towards a low-entropy state. The hypothesis does appear to be true: near the big bang, the universe started with very low entropy. But we really have no idea why the early universe had such low entropy. That question is still unanswered.

Zermelo’s Recurrence Objection:

Henri Poincaré once stated that “a closed, bounded classical system returns to any configuration infinitely often”.
What this means is that if we take some configuration of particles and let the system evolve, then, provided we wait long enough, the system will come back arbitrarily close to its initial state, and the process repeats itself.
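Recurrence is easiest to see in a discrete toy system. Arnold's cat map on a finite grid permutes the finitely many grid points, so every starting state must eventually return; a miniature analogue of Poincaré's theorem, not his actual proof:

```python
def cat_map_recurrence_time(n, start=(1, 1)):
    """Iterate Arnold's cat map (x, y) -> (2x + y, x + y) mod n.

    The map is invertible on the n-by-n grid (its matrix has determinant 1),
    so it is a permutation of the n*n states and every state must recur.
    Returns the number of steps until `start` first comes back.
    """
    x, y = start
    steps = 0
    while True:
        x, y = (2 * x + y) % n, (x + y) % n
        steps += 1
        if (x, y) == start:
            return steps

print(cat_map_recurrence_time(2))    # tiny grid: (1, 1) returns after 3 steps
print(cat_map_recurrence_time(50))   # bigger grid: still finite, just longer
```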

Ernst Zermelo, a mathematician, raised an objection using Poincaré’s statement. He said that “a function S(t) (entropy as a function of time) can’t be both recurrent and monotonic”, i.e. the system cannot return to its initial entropy state if the entropy only ever increases in one direction.

Ludwig Boltzmann tried to answer this by means of what we today call the “Anthropic principle.” Notice that all this happened in the 19th century, before General relativity, so the setting was a Newtonian world. He said that for all the matter in the universe there is a certain maximum value of entropy, which we call Smax. If the system sat at that maximum, with nothing changing, life would not be possible. So a universe containing observers cannot possibly be in the maximum entropy state.

But there can be fluctuations, some small and some big. The fluctuations repeat in time, so there is the recurrence Zermelo was talking about. Boltzmann argued that when there is a large fluctuation, i.e. entropy goes down and then starts increasing again, that is when planets, stars and galaxies are formed. We are living inside such a fluctuation right now.

But there is a problem with this theory, referred to as “Boltzmann Brains”. Small fluctuations are vastly more common than large ones, so a fluctuation that produces just a lone brain is enormously more likely than one that produces whole bodies, planets and galaxies all at once. A fluctuation large enough to form everything together is extremely rare, so we should not expect to live in one.

A further problem is this: if the fluctuations were non-interacting, we could not perceive any other human being or anything else. But that is not the case; we do in fact perceive things and other human beings. So something is wrong with this theory, and it cannot be accepted.

Better solutions have come along since, but none has been fully satisfactory. The 2nd law has been very important, and at the same time very confusing for us. Maybe someday we will have a complete theory explaining all of this.


Abel Prize: One of the Highest Honors for a Mathematician


The Abel Prize is awarded annually by the King of Norway to one or more outstanding mathematicians and is dedicated to the memory of the Norwegian mathematician Niels Henrik Abel. The prize recognizes contributions to the field of mathematics that are extraordinary and of great influence. The first Abel Prize was awarded to the French mathematician Jean-Pierre Serre in 2003.

Norwegian mathematician, Niels Henrik Abel after whom the prize is named (Image: Britannica)

The Abel prize was proposed by the Norwegian mathematician Sophus Lie when he learned that Alfred Nobel’s plans for annual prizes did not include a prize in mathematics. Lie received overwhelming support from the leading centers of mathematics in Europe. However, the promises of support were tied so personally to him that after his death the effort to establish an annual prize in mathematics was buried. In 1905 King Oscar II showed interest in an annual mathematics prize along the lines of the Nobel prize, but that too came to an end with the dissolution of the union between Norway and Sweden.

In 2001 the Government of Norway announced that it would establish the Abel Fund, worth NOK 200 million, and the first Abel Prize was presented in 2003. The prize is often referred to as the mathematician’s Nobel prize and comes with a monetary reward of 7.5 million Norwegian kroner (NOK).

How are the Abel laureates Selected?

Since the Abel prize recognizes any outstanding work in the field of mathematics, any worthy nominee(s) can be put forward. The right to nominate is open to anyone, and nominations are accepted until September 15th each year; nominations received after the deadline are considered for the following year. The nomination must be kept confidential from the nominee(s), and deceased persons cannot be nominated. A nomination should be accompanied by a description of the work, the CV of the nominee(s), and the names of distinguished specialists in the nominee’s field who can be asked for their opinion.

Self nominations are not considered.

The members of the Abel committee are nominated by the International Mathematical Union and the European Mathematical Society to the Norwegian Academy of Science and Letters. The Abel committee (consisting of five leading mathematicians, Norwegian and non-Norwegian, from all over the world) then reviews the received nominations and recommends a worthy Abel laureate to the Academy. Committee members are appointed by the Academy for two years.

Based on the recommendation of the Abel committee, the Norwegian Academy of Science and Letters announces the Abel laureate(s) in March. The award ceremony takes place at the University of Oslo, where the prize is presented by His Majesty King Harald V.

The First Woman to win the Abel Prize

In 2019, the American mathematician Karen Keskulla Uhlenbeck became the first woman to be awarded the Abel prize. She was honoured for her pioneering achievements in geometric partial differential equations, gauge theory and integrable systems, and for the fundamental impact of her work on analysis, geometry and mathematical physics.

One of Uhlenbeck’s famous contributions was her predictive mathematics inspired by soap bubbles. The curved surface of a soap bubble is an example of a “minimal surface”, a surface that settles into the shape with the least possible area. Examining how these surfaces behave helps researchers understand phenomena across a wide array of scientific fields.

Karen Uhlenbeck, H.M. King Harald V (Image: Trygve Indrelid/NTB)

This Year’s Abel Prize Laureates

Gregory Margulis, Abel prize laureate 2020 (Image: Dan Rezetti)
Hillel Furstenberg, Abel prize laureate 2020 (Image: Dan Rezetti)

The 2020 Abel Prize has been awarded to Hillel Furstenberg of the Hebrew University of Jerusalem, Israel, and Gregory Margulis of Yale University, New Haven, CT, USA, “for pioneering the use of methods from probability and dynamics in group theory, number theory and combinatorics”.

In addition to the Abel prize, the Norwegian Academy of Science and Letters also awards the Kavli Prize in nanoscience and neuroscience.


Physics × Chemistry = Deciphering The Weird Quantum Glory Of Ultracold Gas


Physics shows great potential in the quantum world, and chemistry reveals bizarre new possibilities for observing quantum states. Merging these lines of research opens amazing new paths into previously unimagined territory. Researchers at UC Santa Barbara have now put this idea into motion and deciphered the weird quantum glory of an ultracold gas.

Researchers believe this is no minor breakthrough: physicists have been trying for decades to link quantum and classical physics. This study may help connect areas of physics that have long been disconnected, such as classical chaos theory, for which quantum mechanics has had no language.

A briefing: what is an Ultracold Gas?

Simply put, the name defines itself: gases or atoms maintained at temperatures close to zero kelvin are known as ultracold atoms or ultracold gases. Several techniques are used to cool atoms down to within a sliver of absolute zero. So why are scientists keen to study them?

Because at such low temperatures, the quantum-mechanical properties of the atoms or molecules become dominant. These properties provide pathways for studying fundamental particles, atoms and sub-atomic particles, which in turn lays the foundation for quantum chemistry, quantum physics, quantum technology and many more fields that can change the scenario of science and technology in the future.

Briefing what is Ultracold Gas (image: physicsworld)

Such systems are also capable of creating exotic phenomena like Bose-Einstein condensates, and they open doors to simulating condensed-matter systems that are normally difficult to observe.

Bloch Oscillations: Building up the story

Before we head into the strangeness of ultracold gases, let us first understand what Bloch oscillations are. (Believe me, this is needed to understand the rest of the article.)

This is a phenomenon from solid-state physics, the branch that describes how the atomic-scale properties of a material influence its large-scale properties. Bloch oscillations describe how a particle reacts when it is confined to a periodic quantum structure, a varying periodic potential, and subjected to a constant force.

Describing Bloch Oscillations
Typical time-resolved simulation of pulse undergoing Bloch Oscillations (GIF: WIKIPEDIA)

So, Nobel laureate Felix Bloch predicted that the motion of particles kept under such conditions in a perfect crystal would be oscillatory.

However, this phenomenon is hard to observe in natural crystals due to lattice defects. Interestingly, it has been observed in semiconductor superlattices and, in various ways, in ultracold atoms.
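Bloch's prediction can be sketched semiclassically with a textbook one-band tight-binding model (an illustration with arbitrary parameters, not the experiment's actual system):

```python
import math

J, F = 1.0, 0.1   # hopping energy and applied force (hbar = lattice const = 1)

def bloch_position(t):
    """Semiclassical position of a particle in a 1-D tight-binding band
    E(k) = -2J cos(k) under a constant force F.

    The crystal momentum grows linearly, k(t) = F*t, so the group velocity
    v = 2J sin(k) oscillates: the particle wobbles instead of accelerating.
    Integrating v gives x(t) = (2J/F) * (1 - cos(F*t)).
    """
    return (2.0 * J / F) * (1.0 - math.cos(F * t))

bloch_period = 2 * math.pi / F                   # Bloch period T_B = 2*pi/F
print(abs(bloch_position(bloch_period)) < 1e-9)  # True: back at the start
print(bloch_position(bloch_period / 2))          # maximum excursion, 4J/F
```

The counterintuitive punchline is visible in the numbers: a constant push produces a bounded, periodic excursion rather than runaway acceleration.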

Deciphering the topic and unwinding the strangeness:

These position-space Bloch oscillations were actually predicted nearly a decade ago but only observed recently. And there is something more bizarre still: in the past couple of years, the concept of Bloch oscillations was taken to a whole new level. By subjecting the system to an additional periodic force, scientists added a time dependency to the oscillations. Indeed, oscillations on top of oscillations, dubbed Super Bloch Oscillations, were discovered.

Bloch oscillations and super Bloch oscillations (image: Hail Science)

So how is this related here? Why discuss Bloch oscillations at all?

The researchers here took the Bloch system one step further. They modified the lattice in which the gas atoms interact, and in a simple way: by varying the laser intensities and external magnetic fields, they gave the system a time dependency. Not only this, they also curved the lattice, which created a non-homogeneous force field.

Doing this slowed the oscillations down, which gave them the chance to observe minute details clearly. It also opened the way to observing what happens to a Bloch system in a non-homogeneous force-field environment.

Unwinding weirdness using ultracold gas led to uncommon results:

While performing these experiments, some uncommon, unpredictable results appeared. Observing the system, the researchers noticed that the atoms shot back and forth, sometimes moving apart, sometimes forming unusual patterns, all in response to pulses of energy pushing on the lattice in different ways.

“We could follow their progress with numerics if we worked hard at it,” researcher David Weld said. “But it was a little bit hard to understand why they do one thing and not the other.”

Quantum Shake- The puzzling Phenomenon
Because shaking a quantum system can do crazy things (image: UC SANTA BARBARA)

When they tried to interpret the complex physics, they saw only a mess, because there was no apparent symmetry in the behavior of the atoms.

To find a solution, the researchers eliminated a dimension, in this case time. They did so by adopting a mathematical technique from classical non-linear dynamics: the Poincaré section, also called a stroboscopic map.

In this experiment, the lattice system was modified periodically in time. What they did was overlook the in-between times and observe the behavior once every period. Naturally, results appeared; the beauty of physics and mathematics showed up. Proper symmetry emerged in the shapes of the atoms’ trajectories. Observing the system only once per period yielded something like a stop-motion representation of the atoms’ complicated yet cyclical movements.

These paths explained exactly why, in some systems, the atoms get pumped along, while in others the wave function spreads out.
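The once-per-period trick can be sketched with the classic kicked rotor and its Poincaré section, the Chirikov standard map (a stand-in textbook illustration, not the team's actual lattice system):

```python
import math

def standard_map_section(k=0.9, n_periods=500, theta=1.0, p=0.5):
    """Chirikov standard map: the stroboscopic (Poincare-section) view of a
    periodically kicked rotor, recording (theta, p) once per drive period.

    Sampling once per period collapses the messy continuous-time motion
    onto clean closed curves (regular orbits) or a scattered cloud (chaos),
    the same stop-motion trick described above.
    """
    points = []
    for _ in range(n_periods):
        p = (p + k * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        points.append((theta, p))
    return points

section = standard_map_section()
print(len(section))  # 500 snapshots, one per drive period
```

Plotting `section` for several starting points would show the familiar mixed picture of smooth islands and chaotic seas.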

Breakthrough and Future:

One of the major possible breakthroughs from this study of ultracold gases is using the knowledge to engineer quantum systems: systems that could show amazing new behaviors, with applications in one of the most flourishing fields of science and technology, quantum computing.

Another view of this study seeks answers about the emergence and future of quantum chaos, the study of how chaotic classical systems can be described in terms of quantum theory. This could make chaos theory easier to understand and, at the same time, reveal relations between the quantum and classical theories of physics.

Understanding QUANTUM CHAOS can be key to understand the complexity of the universe
(GIF: PREDICT)

Most of us have heard of the famous Butterfly Effect, the basic idea that a small change in one state of a system can result in large differences in a later state. In other words, the outcome depends sensitively on the initial conditions.

For context, this is part of classical chaos theory.
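This sensitivity is easy to demonstrate with the chaotic logistic map, a standard textbook example (not from the study itself):

```python
def logistic_divergence(x0=0.2, nudge=1e-10, r=4.0, n_steps=60):
    """Butterfly effect in the logistic map x -> r*x*(1 - x) at r = 4.

    Two trajectories that start only 1e-10 apart are iterated together;
    the largest gap seen grows to order one, so a tiny nudge to the
    initial condition completely changes the later state.
    """
    a, b = x0, x0 + nudge
    max_gap = 0.0
    for _ in range(n_steps):
        a, b = r * a * (1 - a), r * b * (1 - b)
        max_gap = max(max_gap, abs(a - b))
    return max_gap

print(logistic_divergence() > 0.01)  # True: the 1e-10 nudge has exploded
```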

Is this feature reproducible in quantum systems? Is it feasible? And if so, to what extent?

It is genuinely puzzling to recover the explanations of classical chaos theory from quantum theory, and this study may be one small piece of that research.

Confusion is a state of mind adopted to things either not understood, or not willing to be found.

Believe me; it is a door to the creativity of finding things which are obviously not obvious.

OSD

Energy storage: How far are we from a clean and green future

The world demands more power, preferably in a form that is clean and renewable. Our energy storage strategies are currently shaped by lithium-ion batteries, which unquestionably fall short for some applications. So what can we look forward to in the years to come?

Let’s begin with some battery basics. A battery is an energy-storage device made of a pack of multiple cells; each cell has a cathode (positive electrode), an anode (negative electrode), a separator, and an electrolyte. Different chemistries and materials affect the properties of the battery: the amount of energy it can store and output, the power it can provide, and the number of times it can be discharged and recharged (its cycling capacity).

Scientists around the world are working on multiple energy storage technologies that could one day meet the energy requirements of mega-cities at just one tap. Here are some cutting-edge developments involving diamond, radioactive materials, and organic molecules.

Diamond-based Nuclear Batteries: Newer and better approach towards energy storage

Researchers have been working on ways to turn radioactive material into an electric current that lasts for years. These batteries, called nuclear batteries, work on a principle called betavoltaics: they are powered by the beta decay of radioactive material. Beta particles are just high-energy electrons, so placing a beta-emitting material next to a semiconductor meets, at least in theory, the requirements for setting an electric current in motion.

Diamond-based Nuclear Batteries:
Newer and better approach towards energy storage
[Image: Seeker]

These batteries have very low power outputs, but they last until the material has completely decayed. Radioactive materials can have half-lives of centuries to millennia, and this is what lets these batteries run for decades without significant power loss.
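Because the output tracks the source's activity, the power fade follows simple exponential decay. A minimal sketch, assuming output proportional to activity and carbon-14's roughly 5730-year half-life:

```python
def remaining_power_fraction(years, half_life_years=5730.0):
    """Fraction of a betavoltaic cell's initial output left after `years`,
    assuming the output scales with the activity of the source
    (carbon-14 here; an illustrative model, not a datasheet figure)."""
    return 0.5 ** (years / half_life_years)

for t in (10, 100, 1000):
    print(t, remaining_power_fraction(t))   # ~99% still left after a century
```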

Betavoltaics are different from the Radioisotope Thermoelectric Generators (RTGs) used by NASA for space missions. RTGs are powered by the heat of radioactive materials like plutonium rather than by beta particles directly, and they too are sometimes referred to as nuclear batteries; betavoltaics, however, promise a longer lifetime. The first betavoltaic battery was developed by the Radio Corporation of America (RCA) in 1954, when it was considered a big leap in energy solutions. RCA imagined it being used in wristwatches, hearing aids, and radios.

A magazine from 1954 described the invention as revolutionary for energy storage and compared it to Edison’s light bulb. Yet while light bulbs are everywhere, a device running on an atomic battery is rarely found in general use.

Today betavoltaics have found their main applications in deep-space exploration and military affairs, nowhere close to average consumers. Multiple factors are responsible for the non-ubiquity of these batteries, but the major one is safety. Betavoltaics are, on the whole, safer than other nuclear power systems, but some materials can pose a serious threat; RCA’s prototype from 1954, for instance, needed Strontium-90, exposure to which causes leukemia.

Coating the battery with radiation-blocking materials was not enough to hand it off to markets and consumers. Nevertheless, in recent years several research teams have been looking for ways to safely harness the power of radioactive materials, and scientists from the University of Bristol have discovered a promising method.

Unlike the majority of electricity-generation technologies, which use energy to move a magnet through a coil of wire to generate a current, the man-made diamond produces a charge simply by being placed in close vicinity to a radioactive source. The researchers have come up with a prototype based on Carbon-14, a naturally occurring radioisotope found in the atmosphere and in all life forms; it is also a waste product of nuclear power plants. Their aim is not only to dispose of such waste but to recycle it into energy.

Carbon-14 is also not an especially hazardous radioisotope, unlike elements with rapidly disintegrating nuclei. The team isolated Carbon-14 and used it to synthesize diamonds; the resulting diamond is radioactive and can produce an electric current. Batteries based on such a radioactive diamond would not produce enough energy to charge a phone, but enough to power smoke detectors, emergency signs, IoT devices, jet-engine sensors and deep-sea cable sensors, lasting for decades without replacement. And because diamond is a very hard solid, the radiation is absorbed within a short range, which makes the battery safer.

Perhaps the most impactful applications of this battery are medical implants. Today we largely rely on lithium-ion batteries, but those have limitations; the pacemakers installed after heart surgery run on such batteries. With such an innovation, we leap a step closer to a one-stop energy solution.

Vanadium Flow Batteries: Redefining ways of energy storage

Scientists aim to sustainably power entire cities, and what brings us a step closer is the liquid-based redox flow battery, capable of storing energy by the tens and hundreds of kilowatt-hours.

Vanadium redox flow battery
[Image: Wikipedia]

Innovations in energy science these days promise that harnessing energy from renewable sources like solar and wind will become efficient and cheap. But when it is dark and not so windy, or demand for power exceeds the output, we have to fall back on fossil fuels. What if we could store this energy in such a way that it can be retrieved efficiently when needed? Batteries seem to be the solution to meet these energy needs.

There are different battery types, each with its own strengths and weaknesses. The type best suited to this job uses flowing liquids and is called a redox flow battery. A redox flow battery can be seen as a hybrid between a battery and a fuel cell. It consists of two tanks of electrolyte, one positive and one negative. Between the tanks is a cell stack, into which the positive and negative solutions are pumped, separated by a membrane. Inside the cell stack, the ions in the negatively charged solution give up an electron, a process called oxidation.

The released electrons are picked up by an electrode in the cell stack, travel through the connected device, and arrive at another electrode on the other side of the membrane, where reduction takes place: the ions in that solution pick up the electrons. Positively charged hydrogen ions travel back across the membrane to maintain the charge balance. This is what happens while a device is being powered by the battery; the reverse takes place while charging.
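The charge stored in the electrolyte tanks follows from Faraday's law, so capacity scales with tank volume. A back-of-the-envelope sketch; the concentration and cell voltage below are typical textbook numbers, assumed here purely for illustration:

```python
def tank_energy_kwh(tank_litres, conc_mol_per_l=1.6,
                    cell_voltage=1.4, electrons_per_ion=1):
    """Rough energy capacity of a redox flow battery from its tank size.

    charge = concentration * volume * z * F   (Faraday's law)
    energy = charge * cell voltage
    """
    faraday = 96485.0   # Faraday constant, C/mol
    charge = conc_mol_per_l * tank_litres * electrons_per_ion * faraday
    return charge * cell_voltage / 3.6e6       # joules -> kWh

# scaling up capacity really is just a matter of bigger tanks:
print(round(tank_energy_kwh(1000), 1))   # a 1000 L tank holds roughly 60 kWh
```

Real cells lose some of this to efficiency and incomplete utilisation, but the linear tank-size scaling is the point.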

But why talk about this technology when lithium-ion batteries already run our small-scale devices? Li-ion batteries simply do not scale well to supplying power to an entire city, and this is why redox flow batteries are making their way into large-scale energy storage. Lithium is not abundant enough a metal to build batteries that large, and other setbacks, like degradation over time and loss of charge-holding capacity, also count against it.

Flow cell batteries, on the other hand, have qualities that make them perfect for such large-scale power storage. Their capacity can be scaled up simply by using larger tanks of electrolyte, and degradation is far less significant: the electrolyte can last for more than 5000 charge cycles. What holds them back is price and material supply. The metal most widely used in flow batteries is Vanadium, because it charges and discharges reliably for thousands of cycles; but Vanadium is not very abundant in the Earth’s crust, and if these batteries came into the mainstream, prices would go through the roof.

Researchers have tried replacing Vanadium with organic molecules, but those tend to decay and need replacement every few months, and if the solutions used in the cell are very acidic or basic, they damage the pumps and lead to hazardous leaks. But scientists are undeterred and continue to search for a solution. Recently, scientists at USC claimed to have developed an organic, water-based redox flow battery that lasts for 15 years and costs one-tenth as much as a Li-ion battery. The battery they have made is enough for the basic electricity demands of a single house, but their goal is to make electricity available at one tap for an entire mega-city.

One day in the near future we may well see more such promising discoveries playing a vital role in running green energy grids around the world.

Read how these Energy Storing Bricks could one day replace even batteries for energy storage. Click here to explore.

“X17” – Could The New Hypothetical Particle Be Linked To Dark Matter?


The universe is a mysterious place. Every now and then it fires an inconsistency at us and sits there laughing at our inability to comprehend it. Man, even after being mocked by the universe, sets sail on a journey to further his knowledge, only to crash into another peculiarity that sinks his Titanic of exploration. Once he resolves one problem and begins the journey again, a new anomaly shows up and spoils years of head-scratching. The Atomki anomaly is the newest entrant in the world of anomalies.

The Triple Alpha Process and the Atomki anomaly…

It is often said that anomalies and inconsistencies are the keys to understanding the universe. If it were not for the discrepancy in the observed energy of the electron in beta decay, the neutrino would never have been found; if there had been no inconsistency in the internal structure of composite particles, the quark model would never have been proposed. Usually it is the puzzle itself that points toward the solution of a problem in physics.

The existence of carbon in the universe was once a puzzle to physicists, because none of the nuclear reactions that could plausibly take place in a star seemed to lead to carbon. But in the 1950s, Fred Hoyle put forward a brilliant solution. He postulated that in the extreme conditions present in stars, two alpha particles (helium nuclei) could combine to form the unstable beryllium-8 nucleus, and if another alpha particle combined with the beryllium-8 nucleus before it decayed, a carbon nucleus could form. This carbon nucleus would be in a resonant excited state, which could then decay down to the ordinary carbon that exists in the universe today. This is called the triple-alpha process, and it is the very reason for our existence: all the life forms present on Earth owe their being to this marriage of helium-4 and beryllium-8.

The triple-alpha process that ends up in forming the carbon that is present in the universe today.
[Image: Ethan Siegel]

In 2015, a group of Hungarian physicists at ATOMKI (the Institute for Nuclear Research, Hungarian Academy of Sciences) set out to study the short-lived excited state of beryllium-8, one of the essential ingredients in the formation of carbon. To do so, they fired a stream of protons at a sheet of lithium-7 to form unstable beryllium-8. This nucleus decays into two helium-4 nuclei along with a high-energy gamma photon, which has enough energy to form an electron-positron pair. From the known energy of the initial photon, it is possible to work out the expected distribution of angles between the electron and the positron: statistically, the number of pairs is maximum at 0° and then decreases steadily toward a minimum at 180°. This is the theoretically predicted behaviour.

The electron-positron pair production as observed in a cloud chamber.
[Image: Greatians]

But what the Atomki group observed has proven to be a new puzzle piece that fits nowhere on the board. They observed a spike in the number of electron-positron pairs forming at an angle of 140°, which the standard model cannot explain: no such spike should exist if the existing theories hold. This is known as the Atomki anomaly, and it is still perplexing physicists.

The spike observed at 140° for different Energies of protons.
This cannot be explained by any existing theories.
[Image: Observation of Anomalous Internal Pair Creation in 8Be: A Possible Signature of a Light, Neutral Boson]

How X17 enters the game…

Calculations showed that the anomaly could be explained if there existed a new subatomic particle with a mass of 17 MeV. The idea is that the unstable beryllium emits this particle, which in turn decays into an electron-positron pair, producing the spike at 140°. The particle was called X17, where 17 stands for its mass; it would be about 35 times heavier than the electron. The most bizarre property claimed for it is that it is protophobic: it does not interact with the proton at all, although it interacts with the electron and the neutron, and this is one of the prime obstacles to reconciling the particle with the standard model. More interesting still, the particle would be a boson, and bosons, as you know, are the force carriers of nature. The existence of a new boson would imply another fundamental force, a fifth force of nature, mediated by X17 particles.
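A rough feel for where the number 17 comes from: the invariant mass of a relativistic e+e- pair follows from the two energies and the opening angle. The symmetric ~9 MeV split below is an illustrative assumption, not the collaboration's full kinematic fit:

```python
import math

def pair_invariant_mass(e1_mev, e2_mev, opening_angle_deg):
    """Invariant mass (in MeV/c^2) of an electron-positron pair, in the
    relativistic limit where the electron's own mass is negligible:
    m^2 = 2 * E1 * E2 * (1 - cos(theta))."""
    theta = math.radians(opening_angle_deg)
    return math.sqrt(2.0 * e1_mev * e2_mev * (1.0 - math.cos(theta)))

# symmetric split of the ~18 MeV beryllium-8 transition, pairs at 140 deg:
print(round(pair_invariant_mass(9.0, 9.0, 140.0), 1))  # ~16.9, the X17 scale
```

A bump at an anomalously wide angle thus translates directly into a candidate particle mass near 17 MeV.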

The graph shows the result of computer simulations in which new particles of 3 different masses were introduced into the system. For a particle with a mass of approximately 16.6 MeV, the result of the actual experiment is reproduced.
Determination of the mass of the predicted particle by correlating the experimental data with the results of various simulations

[Image: Observation of Anomalous Internal Pair Creation in 8Be: A Possible Signature of a Light, Neutral Boson]

This particle recently gained attention because the team at Atomki devised another, similar experiment, this time considering the decay of an unstable helium nucleus. Again a similar anomaly was observed, but at an angle of 115° instead of 140°. They re-examined the experiment and redid the calculations for the helium isotope involved, only to find that the same particle, X17, could explain the anomaly with helium too. This is why the particle has stirred up the scientific community.

X17 and Dark Matter.

Dark matter is an open question in physics. It is something the standard model has not yet incorporated, and not much is known about it; anything unexplainable that turns up in physics naturally points toward it. X17 points toward dark matter more particularly because several dark matter models predict the existence of particles in the mass range of 10–20 MeV. And since it would be a boson, it could be interpreted as the force carrier for the interaction of ordinary matter with dark matter. The fact that this particle was never observed until now, and was always elusive, could also mean it is somehow related to dark matter, which has likewise evaded detection. There is an even stronger hunch that X17 is the dark boson, because calculations have shown that lightweight dark bosons should decay in just the way the Atomki team observed.
If the particle is found to be linked to dark matter, it will be credited with solving the problem of the roughly 80% of the matter in the universe that remains unaccounted for.

dark matter illustration
Artist illustration of dark matter.
[Image: University of Birmingham]

Evidence and Criticism

CERN’s NA64 experiment has also performed a modified version of this experiment, firing electrons at a fixed target, but failed to find the particle. This leaves room for criticism about the particle’s existence. Moreover, this is not the first time this group of researchers has reported evidence for new particles; they have done so a few times before, only to withdraw their claims later. The fact that only this specific team has seen the discrepancy also strengthens the possibility that the signal is a false one caused by a faulty setup. Right now, all eyes are on the Large Hadron Collider beauty experiment (LHCb), which is expected to either prove or disprove the existence of this particle by 2023.

LHCb experiment
The LHC beauty experiment at CERN
[Image:Sci-News]

It is a natural tendency of physicists to jump to extreme conclusions in the excitement of a eureka moment, and this might be one such conclusion fated to fail. But if the particle is proved to exist, it will be a true twist in the story and a game-changer in the plot of physics.

The important thing is not to stop questioning. Curiosity has its own reason for existence. One cannot help but be in awe when he contemplates the mysteries of eternity, of life, of the marvellous structure of reality. It is enough if one tries merely to comprehend a little of this mystery each day.

Albert Einstein

Carbon-Neutral fuel: Sensational artificial photosynthesis that changes the world

2

Artificial photosynthesis sounds fictional, so let’s start with a reality we all face: fossil fuels, the main source of our energy needs, are running out. Researchers, industries, and ordinary people have been looking for alternatives for the past couple of decades, and many promising technologies that could potentially replace fossil fuels have been presented. Yet the world’s demand for supplies that are even more sustainable and efficient still needs to be fulfilled.

One way proposed in the past was to use solar energy to generate the resources needed. But converting solar energy directly into such resources was not very feasible and not very successful, because it needs a large area and a proper arrangement of panels to give good results. Still, it is used in many countries and is considered successful to some extent.

Researchers at the University of Cambridge, on the other hand, have made great progress. They have developed a stand-alone device that can produce a cleaner fuel that is carbon-neutral and completely sustainable. Using the concept of artificial photosynthesis, the device takes sunlight, carbon dioxide, and water and turns them into this carbon-neutral fuel.

The ‘Artificial Leaf’ and Artificial Photosynthesis:

Synthesis gas (syngas) is a mixture of hydrogen and carbon monoxide with a little carbon dioxide. It is widely used in different areas such as fuels, pharmaceuticals, plastics, and fertilizers. Many everyday commodities are made using syngas, and most of us consume such products without ever having heard of it.

But researchers have now demonstrated that an artificial leaf can produce this syngas in a simple, sustainable way. On the artificial leaf there are two light absorbers that replicate the light-harvesting molecules in a plant leaf. These absorbers are combined with a catalyst made from cobalt, a naturally abundant element. The absorbers are made from a material containing bismuth vanadate (BiVO4), and the structure also uses a perovskite, a material with the crystal structure of the mineral calcium titanium oxide (CaTiO3).

artificial leaf that performs artificial photosynthesis
artificial leaf (Image: University of Cambridge)

Inspiration from Mother Nature:

The working of such a leaf is simple and closely mimics photosynthesis in plants. When the device is immersed in water, one of the two light absorbers uses the catalyst to produce oxygen, while the other drives a chemical reaction that reduces carbon dioxide into carbon monoxide and hydrogen. Happening simultaneously, these reactions form the syngas mixture. The perovskite, as mentioned earlier, provides a high photovoltage and electric current to power the chemical reaction, and this combination performs much better than silicon or dye-sensitised materials. The main reason for choosing cobalt over platinum or silver is that cobalt is better at producing carbon monoxide, and it is also cheaper.
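The two halves of the process described above can be summarised with textbook half-reactions (a standard description of water oxidation and CO2 reduction, not equations quoted from the Cambridge paper):

```
Oxygen-evolving absorber:    2 H2O              ->  O2 + 4 H+ + 4 e-
Syngas-side absorber:        CO2 + 2 H+ + 2 e-  ->  CO + H2O
                             2 H+  + 2 e-       ->  H2
```

The CO and H2 from the second absorber together make up the syngas, while the first absorber supplies the electrons and protons by splitting water.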

simple understanding of artificial leaf for carbon neutral fuel
simple understanding of artificial leaf (image: NewScientist)

As an advantage, the researchers stated that these absorbers work efficiently even in low light, meaning the device can perform well on a rainy or overcast day. This opens up its usability to countries with cooler climates and to months outside of summer.

One step ahead: Making storable fuel directly

Artificial photosynthesis produces syngas, which is then converted into liquid fuels for later use. The team’s next step was to use the technology to make a sustainable liquid fuel directly, as an alternative to petrol.

Researchers recently presented a ‘photosheet’ technology that directly converts CO2 and H2O into formate (HCOO−) and O2. This is a potentially scalable technology for utilizing carbon dioxide and curbing the pollution caused by its extensive emission.

newly developed photosheet technology to obtain carbon neutral fuel
newly developed photosheet technology (image: NEW ATLAS)

The artificial leaf used components from solar cells. This latest device instead relies entirely on photocatalysts embedded in a sheet, a so-called photocatalyst sheet, which does not require the components used in the artificial leaf. The sheets are made from semiconductor powders, materials that can be produced in large quantities and are cost-effective at the same time.

This newer technology is robust and synthesizes a cleaner fuel that is easier to store, and it has the potential to make fuel products at scale. The test unit they prepared was 20 cm² in size and can easily be scaled up to several square metres. The formic acid (HCOOH) obtained in later reactions can be accumulated and chemically converted into different types of fuel.

Success and Further Growth:

The success of this technology is remarkable, and its growth could change the future. The chemical reaction occurring in the photosheets, which reduces carbon dioxide to formate, has a selectivity of 97 ± 3% and a solar-to-formate (STF) efficiency of 0.08 ± 0.01%, using water as the electron donor. It also shows high stability during the photosynthetic reaction without sacrificial reagents, while enabling the photocatalytic activity to be scaled.
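To get a feel for what an STF efficiency of 0.08% means, here is a back-of-the-envelope sketch. The formate yield below is a hypothetical number chosen to land on the quoted figure, not data from the paper; the irradiance is the standard full-sun value, and 270 kJ/mol is an approximate Gibbs energy stored per mole of formate.

```python
# Back-of-the-envelope solar-to-formate (STF) efficiency.
DELTA_G = 270e3             # J stored per mol of formate (approx. value)
IRRADIANCE = 1000.0         # W/m^2, standard full-sun assumption
area_m2 = 20e-4             # the 20 cm^2 test sheet
seconds = 24 * 3600         # one day of illumination
formate_umol = 512.0        # hypothetical measured formate yield

energy_stored = formate_umol * 1e-6 * DELTA_G   # joules stored in the fuel
energy_in = IRRADIANCE * area_m2 * seconds      # joules of sunlight received
stf = 100.0 * energy_stored / energy_in
print(f"STF efficiency ~ {stf:.2f} %")
```

Even at this modest efficiency, the arithmetic shows why scaling the sheet area up from centimetres to square metres matters: the fuel output grows in direct proportion.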

chemical energy from artificial photosynthesis
Chemical Energy obtained from artificial photosynthesis (image: ResearchGate)

The wireless device could be scaled up and used on ‘energy farms’ similar to solar farms, producing clean fuel from sunlight and water. Producing this type of clean fuel without unwanted by-products had been challenging, yet the researchers report that this device was highly selective and produced virtually no by-products.

Currently, the cobalt-based catalyst that converts carbon dioxide is easy to make and comparatively stable. To scale up the device, the efficiencies still need to improve, so experiments are being conducted with a range of catalysts to improve both stability and efficiency. Such explorations also raise the possibility of obtaining other solar fuels, and experiments are underway for those too.

Thus, starting from the concept of artificial photosynthesis, researchers developed the artificial leaf and, in turn, a wireless device that can perform this chemistry without complex components or external electricity.


Why dig deeper into the crust of Mother Nature for matter that can be made available from the flurry of air?

–OSD

3D Printing: Technology That Can Save The People From Coronavirus

3

The world and its economy suffered a major setback when the WHO declared coronavirus an ongoing global pandemic on 11 March 2020. Many of us wondered about the implications of this pandemic for various industries, but the logistics industry was plunged into a crisis that was yet to unfold. Disruption in manufacturing and production, together with the shutdown, constrained supply chains, giving rise to critical shortages of medical equipment.

Nurses forced to wear garbage bags amid the shortage of PPE kits
Nurses in a US hospital are forced to wear garbage bags as PPE because of a shortage of the latter. [Image- NYPost]

With the advent of government guidelines amid the ongoing crisis, there was a drastic increase in demand for medical essentials. An industry emerged to counter the supply-demand imbalance, working alongside local helpers to achieve extraordinary feats: the additive manufacturing industry, part of what is termed ‘Industry 4.0’, upgraded its capabilities to join the healthcare sector in the fight against the novel coronavirus.

Beginning of The Crisis

After the outbreak, there was a dire need for medical tests that are as reliable as they are rapid. Mainly two tests were approved for detection of the virus: the complete blood test and the nasopharyngeal swab test. The latter, being rapid as well as simple to use, is deployed throughout the world.

nurses strike over shortage of PPEs
Nurses protests in front of hospital after PPE shortage endangers their safety [Image- Guardian]
shortages of medical supplies in US hospitals
Masks, ventilator parts, pumps, and many types of equipment that are not available in areas of major outbreak. [Image- Washington Post]

The Nasopharyngeal Swab is like a long pipe cleaner but with a very soft brush on the end. The personnel conducting the test would insert the swab from the base of your nose to touch the back of your throat. The soft bristles will collect a sample of secretion for analysis because the cells and fluids have to be collected from the entire passage for a good sample. Needless to say, the swab is a little bit uncomfortable, but it’s part of a procedure. I mean which medical test is ever comfortable!

after swabs fall short, 3-D printing companies started mass-printing them
3D printed Nasopharyngeal swabs developed by Carbon Company. [Image- Carbon3d]

The problem began when demand for nasopharyngeal swabs sky-rocketed overnight. At that instant there were not 10, nor 7, but only two major companies mass-producing them. One of them, Puritan Medical Products, based in Guilford, Maine, tackled the overwhelming demand by ramping up production to 1 million swabs a week. Even then, the U.S. alone was facing a shortage, let alone the whole world.

Rising From The Ashes

The lockdown was a big blow to all existing industries. After the disruption of air, rail, and road transport, the production sector succumbed to high pressure and the vast needs of the medical sector. With logistics disrupted, an immediate local response was needed. That’s where additive manufacturing industries (ADIs) entered the field as a substitute, and became much more: highly motivated people formed a network within days to save the lives of millions, rekindling the torch of mankind.

3-D printed carbon face shields that are as efficient as traditional ones.
3D printed Carbon face shields that are as efficient as traditional ones. [Image- Carbon3d]

To cope with the challenge of providing the latest technology and to save lives, the ADIs created a highly decentralized network. Many companies, hobbyists, and enthusiasts formed an open-source platform of 3D printers to cater to the needs. Companies like Carbon™, Formlabs, and Isinnova accepted the challenge and started making designs for a wide range of products. While many worked on upgrading the technology itself, such as the TiO2 nano-fiber mask, 3D printing put good-quality production within everyone’s reach.

3-D printed isolation wards
3D printed isolation wards to combat the overpopulating hospital OPD wards. [Image- Winsun3d]

The companies worked together with doctors, engineers, and designers to work out the best designs for different types of equipment. The main objective was to develop products that would match all the criteria and guidelines set by the WHO.

Tapping The Hidden Potential of 3D Printing

Digitalization, especially cloud facilities, helped the decentralized network become more robust. Not only local volunteers with 3D printers but also engineers and workers from all over the world offered their help. The key goal was to produce ergonomic designs with the help of various software tools.

A company called Carbon™ picked up the challenge. They made designs, improved them, tested them, and began to mass-produce them, all within 20 days. Their breakthrough technology, Digital Light Synthesis (DLS), uses a high-quality resin and gives a smooth, texture-like finish that can outperform existing products made via the powder-bed technique.

The 3d printing breakthrough carbon machine
The Carbon 3D printing machine. 1) Build platform 2) Liquid Resin 3) Oxygen permeable window 4) Deadzone 5) UV light processor [Image- Carbon3d]
Video of continuous liquid interface production (CLIP)
The heart of process- Deadzone. [Image- Carbon3d]

DLS employs light projection and oxygen-permeable optics to produce exceptional parts. The underlying method, Continuous Liquid Interface Production (CLIP), uses a UV projector to cast cross-sections of the model, encoded in UV light, onto the liquid resin. The resin solidifies when exposed to UV light, while oxygen permeating through the window maintains a thin ‘dead zone’ of uncured liquid. Resin from the sides rushes in to take the place of the cured resin, creating a continuous flow of liquid. (Know More about CLIP)
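The projection step can be pictured with a toy slicer: given a model described as an implicit function, compute the boolean cross-section mask that each layer’s UV flash would expose. This is only an illustration of the slicing idea, not Carbon’s actual software, and the grid resolution and sphere model are arbitrary choices.

```python
def slice_model(inside, size=16, layers=8):
    """Return one boolean exposure mask per layer for a model given as an
    implicit function inside(x, y, z) -> bool, coordinates in [0, 1]."""
    masks = []
    for k in range(layers):
        z = (k + 0.5) / layers                      # height of this layer
        mask = [[inside((i + 0.5) / size, (j + 0.5) / size, z)
                 for j in range(size)] for i in range(size)]
        masks.append(mask)
    return masks

# Example model: a sphere of radius 0.4 centred in the unit cube
sphere = lambda x, y, z: (x - .5)**2 + (y - .5)**2 + (z - .5)**2 <= 0.16
masks = slice_model(sphere)

# Exposed pixel count per layer; the widest cross-section is in the middle
areas = [sum(map(sum, m)) for m in masks]
print(areas)
```

A real printer streams masks like these to the UV light engine continuously, which is what lets CLIP grow parts without the stop-and-peel cycle of layer-by-layer printing.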

The RP95-M mask
The RP95-M mask developed as a result of collaboration of 30 companies. [Image- Winsun3d]
RP95-M mask
RP95-M mask offers the highest FFP3 protection in its class. [Image- Winsun3d]

The 3D printing sector also helped hospitals by printing and assembling whole isolation wards for patients. Many innovations were also made in resins, mask filters, and other products. Recently, CTU made a new mask called “RP95-M”, which offers the highest degree of protection at the FFP3 level.

Embracing The Challenges

The critical point for this industry was to make consumers believe that the products they were using were tested and performed equally well as, or in some cases better than, the traditional ones. Another challenge was ensuring that the STEP/STL files were not tampered with. Copyright infringement was a serious issue, since files had to be shared or uploaded to reach more people.

The core idea of the setup was to help the most affected areas. The FDA also issued an international risk-classification system that provided regulations and ensured the performance and safety of devices. New designs are approved after testing, and new materials are certified for the benefit of 3D printing.

Future 4.0

In the wake of the Covid-19 pandemic, the ADIs played a major role in solving the crisis and sprang into action when the world was in a state of emergency. The inherent flexibility of the 3D printing industry also provides a base for a cleaner, greener future: advances in new materials could make prints biodegradable and tackle the disposal of medical waste in one shot.

disposal of medical waste is currently a main concern for experts.
Used N95 masks are collected at Massachusetts General Hospital in Boston on April 13, 2020. Hospital staff wrote their names on their old masks so each can be returned to its original user to ensure the best fit. As part of a bold initiative involving hospitals across the state, MGH on Monday began distributing thousands of freshly decontaminated N95 masks to health care workers after the equipment went through an elaborate cleaning process at a site now up and running in Somerville. The treatment takes place inside a giant decontamination machine owned and operated by the Ohio nonprofit Battelle. Hospitals plan to use the machine to alleviate critical shortages of respirator masks for workers battling the coronavirus pandemic. At its peak, the system will be able to treat up to 80,000 masks per day, a prospect hailed as a game-changer for the region. (Image- Bostonglobe)

Thanks to its supply-on-demand ability and its digital nature, 3D printing comes in very handy whenever needed. Post-pandemic supply chains are expected to be more fragmented, changing the manufacturing scenario with more investment in the ADI sector. There is no doubt that this industry will be a gold mine in the post-pandemic, cyber-physical age.

The secret of change is to focus all of your energy not on fighting the old, but on building the new!

Socrates, Philosopher

A New Anomaly – Endangering our conception of Universe

0

The physics world of today is in chaos. Theories once glorified for their accurate explanations are now battling the anomalies the universe keeps firing at them. Be it the fine-structure constant or the magnetic dipole moment of the electron, with newer ways of observing come newer results, which demand drastic changes to the existing explanations. This time it is the very famous Hubble constant that is playing its game.

Hubble constant, what is it?

In the 1920s, Edwin Hubble, who had just found a new way to estimate the distance from Earth of Cepheids (special stars whose luminosity rises and falls regularly), noticed that everything in the universe moved away from Earth, and that the speed of recession increased with distance: the farther away an object is, the faster it moves away. This would mean one of two things: either Earth is the centre of the universe, or the entire universe is expanding. Hubble theorized that the universe is expanding, and gave us the famous constant that still bears his name.
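Hubble’s procedure amounts to fitting the straight line v = H0·d through velocity-distance data. A minimal sketch of that fit, using made-up (hypothetical) galaxy data chosen only for illustration:

```python
# Toy illustration of Hubble's method: recession velocity grows linearly
# with distance, v = H0 * d.  The data below are invented for illustration.
distances_mpc = [5, 10, 20, 40, 80]            # hypothetical distances (Mpc)
velocities_kms = [370, 720, 1480, 2910, 5890]  # hypothetical speeds (km/s)

# Least-squares slope of a line through the origin: H0 = sum(v*d) / sum(d*d)
h0 = sum(v * d for v, d in zip(velocities_kms, distances_mpc)) / \
     sum(d * d for d in distances_mpc)
print(f"H0 ~ {h0:.1f} km/s/Mpc")
```

The slope of that line is the Hubble constant; the entire modern controversy is over what its true value is, not over the linear relation itself.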

A cepheid in M31
[GIF:AAVSO]

The constant problem with the constant…

Ever since Hubble introduced his constant, it has been a topic of debate. Hubble had errors in his data and derived a value approximately 10 times the actual one. Since then, new methods have been devised to measure the constant accurately, and to everyone’s surprise, every method gave a different number; no two methods had matching readings.
One method works with the existing datasets on Cepheids to calculate the constant accurately. Worked out this way, the value comes to 50,400 mph per million light-years (73.4 km/s/Mpc).

anomaly
The inset image shows the cluster filled with cepheids which can be used to measure the Hubble Constant
[Image: Space]


Astronomers also devised a new way to determine the constant: the cosmic microwave background (CMB), the radiation left over from the Big Bang, which carries information about the initial universe. The Planck satellite of the European Space Agency (ESA) spent years gathering data on the CMB, and the value of the constant calculated from this data comes to 46,200 mph per million light-years (67.4 km/s/Mpc).

The Cosmic Microwave Background as observed by Planck satellite.
[Image: ESA]

These two numbers may look close, but the difference between them is very significant, even after accounting for their error margins.
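The two unit systems quoted above can be cross-checked with a short conversion, using 1 Mpc ≈ 3.2616 million light-years:

```python
# Cross-check the two unit systems quoted above.
KM_PER_MILE = 1.609344
LY_PER_MPC = 3.2616e6        # light-years in one megaparsec

def to_mph_per_mly(h0_km_s_mpc):
    """Convert km/s/Mpc into miles per hour per million light-years."""
    km_per_hour = h0_km_s_mpc * 3600          # km/h per Mpc
    mph_per_mpc = km_per_hour / KM_PER_MILE   # mph per Mpc
    return mph_per_mpc / (LY_PER_MPC / 1e6)   # mph per million light-years

print(to_mph_per_mly(73.4))  # close to the 50,400 quoted above
print(to_mph_per_mly(67.4))  # close to the 46,200 quoted above
```

Running it reproduces both quoted figures to within rounding, confirming that the two pairs of numbers describe the same two measurements.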

If the value calculated from cepheids is correct, there is something entirely wrong with our perception of the universe. This would mean that we need to introduce new and exotic physics into our models of the universe. 
If the value from the Planck satellite is correct, then astronomers have always been calculating distances in space wrong.
Either of the two possibilities is scary and requires us to change our perception of the universe.

A possible explanation for this anomaly…

There is one more thing this could mean: the universe’s expansion may be accelerating, so that the universe is expanding faster now than it was earlier. But this would still require radical transformations of our theories. Many possible explanations have been offered for this anomaly; let’s look at a few of them.

1. Decaying Dark Matter:

Dark matter is one of the fundamental ingredients of the universe, yet very little is known about it, so it is naturally the first thing to get tampered with. One proposed theory says that dark matter particles decay into a lighter particle and something called a dark photon. The gravitational pull exerted by dark matter then decreases with time, accelerating the expansion of the universe.
The important drawback of this theory is that to explain one discrepancy it introduces two new unknowns, the lighter dark matter particle and the dark photon, which is why it makes physicists uncomfortable.

Artist’s impression of Dark matter.
[Image: EarthSky]

2. Variable Dark Energy:

Dark energy is another mystery, itself introduced to resolve the existing mystery of the universe’s accelerated expansion. Models with varying dark energy could explain the discrepancy by positing that excess energy present in the early universe provided the pressure required to accelerate the expansion, which is why we see accelerated expansion today.
But again, no complete model of dark energy exists, so any change here could lead to further discrepancies.

3. Modified Gravity:

In the standard method, all of the universe’s energy and matter content is fed into Einstein’s equations and the universe’s fate is determined. But some people have tried to modify the recipe itself rather than its contents, i.e. to modify the theory of gravity. William Barker, a PhD student at the University of Cambridge, believes that a theory of modified gravity could help explain this anomaly: the modified gravity could behave as if the early universe had excess radiation, which would add outward pressure and accelerate the expansion. In the preprint, the authors acknowledge that the model requires more analysis.

This anomaly could be an inadequacy in our observations, or it might be a flaw in our theories; all we can do right now is wait and watch what the fate of the universe will be. But cosmology never stops entertaining us, and nobody knows what the universe has in store. Anything could be anticipated any day, and no surprise is really a surprise, because uncertainty is the only certainty in the universe.

Water: A strange but life-supporting liquid

“When heated to the state of steam it is invisible but has enough power to split the earth itself.”

Bruce Lee

Bruce Lee, one of the most prominent martial artists of all time, once narrated an episode from his early days as a budding martial artist. While sailing, the young Lee thought of his past training, got mad at himself, and punched the water! After punching he suddenly realized that water is the essence of kung fu, which he wished to attain through all his training. Even after being struck, the water felt no pain, and he failed to grab a handful of it. The seemingly weak water that could fit in the smallest of containers has the strength to defeat the hardest of things.

With these observations he found the inspiring side of water and its nature. That was the notion of water understood by an apex martial artist, while scientists around the world see it as the “least understood material on Earth”. Let’s investigate the ice cubes you pop from the freezer and see how they differ from the other forms of ice found on Earth and elsewhere in the universe.

Ice with Exceptionally Low Density

Water is essentially the only liquid whose solid form is less dense than the liquid, but this is not the situation for the other forms of ice found in diverse places on Earth and in stellar space. Ice exists in more than 17 different forms and has created many misapprehensions.

Most of the phases of ice form under different conditions of pressure and temperature. Widespread explorations have examined the effect of positive pressure, with a predictable result: as pressure increases, the density of ice also increases. However, the effects of extreme negative pressure on water molecules are not well known. With the aid of molecular dynamics, researchers have theoretically discovered a new class of ice phases and named them aeroice.

This phase of ice has the lowest density among all known ice crystals. With this discovery, the fundamental properties and behavior of water confined to nanotubes and nanopores can be better explained.

In 2014, an ice phase that forms under negative pressure was discovered by researchers at Okayama University in Japan. This phase (ice XVI) has a 3-D crystalline structure forming a zeolite-like arrangement. The cage-like structure captures neon in its void space, and when the neon is removed the structure remains stable, leaving an ice phase of ultra-low density at extreme negative pressure. Owing to the similarity between the crystal structures of silica (SiO2) and ice (H2O), scientists zoomed in through 200 silica zeolites from the database and found 300 possible structures.

The approach can be summarized in a few simple steps: first, the oxygen atoms are removed from the SiO2 structure and each Si atom is replaced with an oxygen atom. The next step, adding hydrogen atoms, completes the structure of this ice phase. The density of this ice is almost half that of water, close to 0.5 g/cm³, and among the zeolite-like phases of ice it is the most stable one.
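The three-step mapping can be sketched symbolically. The function below only juggles atom labels to show how the stoichiometry works out; actual studies of course transform full 3-D coordinates, not labels:

```python
# Toy version of the silica-to-ice mapping described above:
# 1) drop the O atoms of the SiO2 framework, 2) relabel each Si as O,
# 3) add two H per remaining O to complete H2O stoichiometry.
def silica_to_ice(atoms):
    frame = [a for a in atoms if a == "Si"]     # step 1: remove oxygens
    oxygens = ["O" for _ in frame]              # step 2: Si -> O
    hydrogens = ["H"] * (2 * len(oxygens))      # step 3: two H per O
    return oxygens + hydrogens

unit_cell = ["Si", "O", "O", "Si", "O", "O"]    # SiO2 stoichiometry
ice_cell = silica_to_ice(unit_cell)
print(ice_cell.count("O"), ice_cell.count("H"))  # 2 O and 4 H: H2O ratio
```

The point of the exercise is that every silica zeolite framework maps onto a candidate hydrogen-bonded ice framework, which is why the 200 known silica zeolites were such a convenient search space.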

This research interests not only academics but also scientists working in other fields. It can pave the way to understanding the behavior of water confined to nanotubes and nanopores, and it would help researchers who aim to colonize off-world places.

One Dimensional Ferroelectric ice

In a new study, a team of chemists has developed a new method for synthesizing a type of ferroelectric ice, crystallized so that all of its bonds line up in the same direction, producing a large electric field.

Every water molecule carries a tiny electric dipole. But the random arrangement of water molecules during freezing causes the fields to cancel out, as the dipoles face in different directions, so ordinary ice has no net electric field. In contrast, at low enough temperatures the bonds in ferroelectric ice all point in the same direction, which causes polarization and produces an electric field.
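The cancellation argument is easy to demonstrate numerically: the net moment of N randomly oriented unit dipoles grows only like √N, while N aligned dipoles add up fully. A minimal sketch, using 2-D orientations for simplicity rather than any real ice geometry:

```python
import math, random

random.seed(1)
N = 10_000
# Random in-plane dipole orientations, as in ordinary ice
random_angles = [random.uniform(0, 2 * math.pi) for _ in range(N)]
net_random = math.hypot(sum(math.cos(a) for a in random_angles),
                        sum(math.sin(a) for a in random_angles))
# Aligned dipoles, as in ferroelectric ice
net_aligned = N  # every unit dipole points the same way

# Net moment per dipole: near zero for random, exactly 1 for aligned
print(net_random / N, net_aligned / N)
```

Per dipole, the random arrangement retains only about 1% of the moment here, while the aligned arrangement keeps all of it; that residual macroscopic field is what makes ferroelectric ice special.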

Ferroelectric ice is deemed extremely rare. Scientists are still examining whether pure three-dimensional ferroelectric ice exists in nature; some researchers are looking for evidence of it on Uranus and Neptune. Creating pure 3-D ferroelectric ice in the laboratory is not feasible, since it would take an estimated 100,000 years to form without the assistance of catalysts. As of now, all ferroelectric ices made in the laboratory are less than three-dimensional and in heterogeneous phases.

Ferroelectric ice | Nature | water
The structure of normal hexagonal ice Ih, showing the arrangement of oxygen atoms and one of the many possible random arrangements of hydrogen atoms  (Image: Nature)

To design a ‘water wire’, researchers built a very thin nanochannel that holds just 96 H2O molecules per unit cell. This 1-D arrangement of ice exhibits large dielectric anomalies as the temperature is lowered from 350 K to 177 K, due to a phase transition. What amazed scientists even more is that the nanoconfined water boils at a higher temperature than normal water, and the reason remains unknown.

The hydrogen-bonding interaction between the H2O molecules of the water wire and the nanochannel affects the ferroelectric property of the water. The hydrogen bonds remain intact under nanoconfinement while the hydrogen atoms rotate under the influence of the electric field. A property not observed in normal water is that reversing the externally applied electric field also reverses the polarity of the ferroelectric ice.

Overall, the production of a 1D, single-phase ferroelectric ice using water confined to a nanochannel provides a new way to synthesize ferroelectric materials. The new method could also help scientists better understand the unique properties of ferroelectric ice, which could have applications in the biological sciences, geoscience, and nanoscience.

As Zeng noted, with the efforts of engineers working in nanotechnology, ferroelectric ice could eventually find practical electrical applications as well.

Read about the black, hot form of ice and two different phases of water in our previous posts, only on thehavok.com.

Chaos Theory: Do we really know what we think we know?

6

I can see that the sky is blue, the leaves are green, and a car passes by the window. But would it have been the same if things had been only slightly different yesterday, say, if there had been just one extra gust of wind in the night? Is it possible that this would change things drastically? The sky would no longer be blue but a dark reddish-brown, the color of war. Instead of a car, soldiers carrying guns would have passed the window, all due to just one extra gust of wind.

But we know that this is not the case; it sounds far too crazy. Or is it? Could one small gust of wind from yesterday really change the present so drastically that it no longer looks like the present? Surely that can’t be true, can it?

Let’s take a live example. Last year things were going smoothly. Then one day we got the news of a global pandemic, and the world has never been the same again; we no longer know what will happen by the end of this year.

Sorry for such a long introduction – I just got carried away because the topic we are going to deal with is a very interesting one. Today, we’ll be dealing with “chaos theory”.

Many years ago, Newton gave a deterministic view of the world. He gave some simple equations; solving them with specified initial conditions would allow us to predict exactly what will happen tomorrow, or even in 10 years. But that is not the case.

Let’s see where the problem lies. The physics, and science in general, that we study is based on perturbation theory: we assume the simplest possible solution of a problem and add the necessary correction terms to bring it as close as possible to the real solution.

But things are more complicated than they seem. This doesn’t mean that the world is completely indeterministic, but the models we work with are approximations, and there is a limit to how far those approximations can be trusted.

Let’s take an example. Suppose we have a differential equation that can be used to predict the weather. We just need some initial values, which we plug into the equation to solve it. Suppose the initial values we want to use are the air pressure and the temperature of a certain region.

Now assume that while you are measuring the air pressure, a butterfly flies past, and the measured value picks up an error of 10⁻⁸. The error doesn’t seem like much to us. Such a small error shouldn’t be able to bring a tornado to Japan.

But what happens is that, due to such a small error in the measurement, we get a huge change in the predicted weather even 7 days ahead. That is how sensitive the solutions of non-linear differential equations are to their initial conditions.

These equations are so sensitive that even the smallest changes in the initial conditions can bring drastic changes in future predictions. Chaos theory was first brought to light by Henri Poincaré. He was the one who put an end to Newton’s deterministic world.

Once, an MIT professor and meteorologist, Edward Lorenz, was trying to forecast the weather a month ahead. He put some initial values into the computer and went for a coffee. But when he came back and saw the result, he was shocked. He put the results into an oscilloscope and saw the following

Instead of a single line, he saw that as time progressed, the line divided into two, then four, and then it became so chaotic that we cannot even tell which branch a particular line was emerging from.

This all happened because he had entered an initial value with an error of about 10⁻³. This effect is known as the “butterfly effect” and is the heart of chaos theory. The underlying principle, sensitive dependence on initial conditions, says that these systems are extremely sensitive to their starting values.
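The butterfly effect is easy to demonstrate numerically. Below is a minimal sketch (our own, not from the article) using the logistic map x → 4x(1 − x), a standard toy chaotic system: two starting values differing by one part in a billion end up on completely different trajectories within a few dozen iterations.

```python
# Sensitive dependence on initial conditions, shown with the logistic
# map x -> 4x(1 - x), a one-line chaotic system.
def logistic_trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000, 60)   # "true" initial condition
b = logistic_trajectory(0.400000001, 60)   # measurement error of 1e-9

# The two trajectories track each other at first, then diverge completely.
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(f"largest separation over 60 steps: {max_gap:.3f}")
```

An error in the ninth decimal place roughly doubles every step, so after about thirty iterations the two runs have nothing to do with each other.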

Now let’s take an example of a double pendulum. It behaves something like this.

(GIF: SciPython)

Now if we set two of them going together from almost the same position, the motion would be so different that even the brightest of minds would not be able to tell what will happen next.

Lorenz derived a system of differential equations to model convective fluid flow, and when you graph its solutions, they look something like this.

This is what we call a “Lorenz Attractor”.

It starts spinning initially; there is nothing unpredictable about this. But as time proceeds, it becomes more and more chaotic. This is the reason we are able to predict the weather accurately only up to about 7 days ahead, and precipitation only up to 3 days. The way we forecast the weather today is to run the model with 18–20 slightly different initial conditions and take the average of the results.
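To see this concretely, here is a small sketch (our own, with the standard parameters σ = 10, ρ = 28, β = 8/3) that integrates the Lorenz system for two initial conditions separated by only 10⁻⁸:

```python
# RK4 integration of the Lorenz system with two initial conditions
# separated by a tiny perturbation of 1e-8 in z.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(shift(state, k1, dt / 2))
    k3 = lorenz(shift(state, k2, dt / 2))
    k4 = lorenz(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

dt, steps = 0.01, 2500            # integrate to t = 25
p = (1.0, 1.0, 1.0)
q = (1.0, 1.0, 1.0 + 1e-8)        # perturbed copy
for _ in range(steps):
    p, q = rk4_step(p, dt), rk4_step(q, dt)

sep = sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5
print(f"separation after t=25: {sep:.4f}")  # many orders of magnitude above 1e-8
```

By t = 25 the two trajectories are on completely different parts of the attractor, which is exactly why ensemble forecasting with many perturbed initial conditions is used in practice.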

A mathematician once illustrated the chaos of nature using the example of a pool table: “During the familiar game of pool, if a man is to calculate the collisions between the balls, the prediction of the first collision is simple enough that any college student can do it.

The prediction of the fifth collision requires such things as the gravitational attraction of the two people standing nearest to the table, while the prediction of the ninth collision is impossible, as it requires exact knowledge of the positions and momenta of all the particles (electrons, protons, and neutrons) in the observable Universe.”

What will happen we do not know for sure, but one thing we do know is that we no longer live in the deterministic world of Newton. We can most likely tell how the planets will move after 1000 years, but we cannot tell which way the wind will blow in an hour. Everything is chaotic, and long-term predictability is far from our grasp.

As for our human lives, whether they follow a deterministic differential equation, a chaotic motion that could turn unpredictable at any moment, or a divine push from God, is for you to decide.

Read other articles related to Science only at thehavok.com

Energy Storing Bricks: Redefining The Ways Of Power Storage

5

Bricks: a simple substance and the building block of elementary construction. For thousands of years, this block has been used for construction and rarely thought to be of any other use; once used to build something, it can never be repurposed. Unimaginably, this fundamental unit of construction has now been revolutionised by chemists at Washington University in an incredible manner. They have made amazing energy-storing bricks that could be the solution to all our power storage needs.

Before getting to the fancy details of this new technology, let’s look at the basic idea behind it.

Basics of Energy Storing Brick:

Red brick, mostly fired in kilns and used throughout history, has played a vital role in the construction and real-estate industries.

Developing and learning from the history, and combining it with modern technology can create incredible things.

–ANON

Chemists have done exactly that, and have made ‘smart bricks’ that can store energy. The energy can be held until it is needed and then used to power electronic devices.

A simple brick is made from a mixture of different compounds, such as

  • Silica (sand)
  • Alumina (clay)
  • Lime
  • Iron oxide
  • Magnesia

All in different proportions, as per the needs.

basics of energy storing bricks
How they developed the idea 💡 (Image: https://cdn.arstechnica.net/)

Many architects have noted that common bricks are good at capturing and storing the Sun’s heat. Also, walls and buildings made of brick and cement occupy a large amount of space that is, in a way, underutilized. So the chemists asked: why not give bricks an energy-storage purpose in addition to just occupying space?

Implementing this next-level thought was a bit challenging, yet researchers at Washington University managed to get it done. They converted a simple red brick into an energy-storing device called a supercapacitor.

Getting Deeper Understanding in a simple way:

The red colour of ordinary bricks is due to the presence of hematite, an ore of iron. Interestingly, state-of-the-art energy storage devices and batteries are also made from the same hematite ore. This triggered the thought of combining the two. Using polymer technology and the microstructure present in conventional bricks, chemists carried out the transformation.

They developed a supercapacitor using the common brick’s hematite microstructure as a reactant, then deposited a fine (technically, nanofibrillar) layer of conducting polymer.

The polymer was PEDOT, poly(3,4-ethylenedioxythiophene). Click here to see the structure of the polymer.

This leads to coatings offering high electrical conductivity and easy charge transfer, which makes them ideal electrodes for the technology.

Also, these devices are made water resistant by an epoxy coating encapsulating the structure, which even lets them store charge at temperatures from −20 °C to 60 °C.

understanding the formation of technology
understanding the formation of technology (image: Nature)

Simply, in the figure a.

The red brick is exposed to HCl acid vapour at a high temperature of 160 °C. As red bricks contain hematite, an ore of iron (Fe), this dissolution liberates Fe³⁺ ions. These undergo hydrolysis with water, forming FeOOH nuclei, which initiate and control the polymerization reaction. Finally, EDOT, the monomer of PEDOT, begins to polymerize. By varying the reaction conditions, the thickness of the polymer coating can be varied accordingly.

In figure b.

The generation of two different types of brick is shown: one surface-polymerized (only partially polymerized), and the other a fully polymerized, monolithic PEDOT brick.

Figure c.

This simply depicts how the concentration of the acid used can vary the length of the finished polymer. It shows that using a higher concentration of acid improves the quality of the final polymer that is formed.

These bricks, when connected in series and coated with epoxy, produce a stable, stationary, waterproof supercapacitor module. The brick also provides excellent structural stability, and its open microstructure (as mentioned earlier) results in a robust PEDOT-coated brick electrode.

The PEDOT coating, applied after cutting the brick with a diamond saw, provides good conduction: it is composed of nanofibers that penetrate the brick’s inner porous network. Ultimately, the polymer coating remains trapped in the brick and serves as an ion sponge that stores and conducts electricity.

Merits and future of energy storing bricks:

With ever-increasing power and electrical-energy requirements, this technology can be a boon. As mentioned earlier, the epoxy coating applied to the bricks makes it possible to use them even underwater.

hypothetical brick wall
hypothetical brick wall (image: Nature)

Scientists proposed this hypothetical brick wall to demonstrate usage and efficiency of the technology.

Figure a.

In the wall, each brick measures 8 inches × 4 inches × 2.25 inches, with two faces covered by the nanofibrillar PEDOT coat. In the figure above, red represents brick and blue shows PEDOT.

Figure b.

These bricks are then joined into a wall using a 1.43 cm gel-electrolyte layer (light blue) and a 1.43 cm epoxy coat (grey).

Figure c.

The front wall area is shown in green, and the researchers’ calculations show that this wall can provide a maximum capacitance of 11.5 kF m⁻² and an energy density of 1.61 Wh m⁻².

brick directly powering a green LED light
energy storing brick directly powering a green LED light (image: Daily Mail)

The gel electrolyte used while fabricating the PEDOT coatings on the bricks extends the cycling ability to 10,000 cycles and provides nearly 90% capacitance retention and almost 100% coulombic efficiency. Such high retention simply means that energy-storing bricks are very efficient in use and can prove to be great power storage and supply devices.

Talking about efficiency and electrical supply, these supercapacitor brick modules can provide a 3.6 V voltage window when three of them are connected in series.
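The reported figures can be cross-checked with the textbook supercapacitor energy formula E = ½CV². The single-cell voltage window below is our assumption (roughly 1 V per module), not a number stated in the article; with it, the formula reproduces the reported energy density:

```python
# Cross-check of the reported wall figures using E = 1/2 * C * V^2.
# Assumption (ours): roughly a 1 V voltage window per individual module;
# the 3.6 V figure quoted above is for three modules in series.
C_per_m2 = 11.5e3     # reported capacitance per square metre, F (11.5 kF/m^2)
V_cell = 1.0          # assumed single-module voltage window, V

energy_J = 0.5 * C_per_m2 * V_cell ** 2   # joules per m^2 of wall
energy_Wh = energy_J / 3600.0             # convert J -> Wh

print(f"stored energy: {energy_Wh:.2f} Wh/m^2")  # ~1.6, close to the reported 1.61
```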

By synthesizing the PEDOT polymer via a special technique called oxidative radical polymerization, properties like low electrical resistance as well as good chemical and physical stability can be achieved.

As the image above suggests, it is now possible to connect bricks to solar cells and provide emergency lighting. Typically, connecting 50 bricks in close proximity to the load can enable emergency lighting for 5 hours. With advancing solar-cell technology, these methods can be refined further.

Thus, with these energy-storing bricks, chemists have redefined the ways of power storage. These could be the ideal building blocks for constructions around the world and could change the future of building technology. The researchers also believe that, apart from making new bricks, it is possible to manufacture them from recycled bricks, giving the idea a sustainable future.

Indeed,

sometimes getting better with, and from, the basics of our building blocks can change the whole scenario of our own blissful future.

–OSD

SPIN – The Most Arduous Concept in Physics

4

The story of quantum mechanics is a very weird one, because that is how the quantum world is: totally counter-intuitive, absurd, and nonsensical. The reason is that our brains are adapted to understand the world as we see it, and whatever we see is only the classical approximation. Humans tend to search for logic in a statement based on their experiences, and quantum mechanical absurdity seems unreasonable because there is no way we can experience it in our classical perception. Spin, too, is one such phenomenon that we stumble over in understanding.

The Story of Spin

In the initial formulations of quantum physics, things were relatively easy. If you wanted to describe an electron, you just took the mass and charge and you had its complete description; nothing else was required. But, unfortunately for the physicists, the tables turned when Stern and Gerlach performed their silver-atom experiment.

What they did was heat some silver atoms in an oven, collimate the atoms leaving the oven into a single beam, and then pass it through a magnetic field, say in the z-direction.

Stern-Gerlach setup
The Stern-Gerlach experiment setup

To oversimplify the model: the silver atom is electrically neutral, and it has a magnetic moment that depends only on its outermost electron.
Since the orientation of the atoms is random, the expected outcome is a spread along the z-axis with a Gaussian distribution. This is what anyone in their right mind would expect.
But the result totally revolutionised the world of physics. The beam split into exactly two parts: about half of the atoms went upwards and the other half downwards.

Outcome of the experiment
The outcome of the experiment.
(Image: docplayer.net)

This could only mean that the electron had an angular momentum of its own, causing a magnetic moment. This angular momentum was discrete, i.e. it pointed in only two possible directions, either up or down, and nothing in between. Since this angular momentum was as if the electron were spinning independently about an axis of its own, it was called spin. The spin can indeed point in any direction, but upon measurement, we get only one of the two possible outcomes along the measured axis.

Around then another physicist, Wolfgang Pauli, while trying to explain the structure of the atom and the states of the electron, gave his exclusion principle. He stated that the two electrons allowed in a given orbital have opposite spins, making their states non-identical and hence allowing the exclusion principle to be satisfied.

Click here to read more about the exclusion principle.

What is it and Where does it come from?

Spin is a property of all particles, just like mass and charge. Angular momentum is a conserved quantity, and so is spin; in fact, the total angular momentum of a system is not conserved unless we include the spin. This is why spin is associated with the intrinsic angular momentum of particles. It is also responsible for the magnetic moment of these particles.

Spin-magnetic moment relation.
An illustration about how spin is linked to magnetic moment.
(Image: Byju’s)

The mathematical description of quantum mechanics in the early stages had no sign of spin, because initially it seemed to have no reason to exist mathematically. It was introduced only to explain phenomena like the Stern-Gerlach experiment or the states of electrons in an atom.
But when P.A.M. Dirac amalgamated relativity with quantum physics, it was soon noticed that spin is an essential requirement of the relativistic quantum theory. It arises from the rotational symmetry of nature.

Spin and Symmetry

As stated above, the spin arises out of the rotational symmetry of fields in nature.

Consider a scalar field, where every point in space is associated with a scalar quantity, or to put it simply, with a number. If you rotate the field at any point by any angle, the number at that point does not change. Such a field is called a spin-zero field. The Higgs field is an example of a spin-zero field.

The vector field has every point in space associated with a vector. Now if you take a vector at some point and rotate the coordinate system, the components of the vector do change. But a rotation by 360° brings back the original vector. Thus the vector field is invariant under rotation by 360°. This vector field is called a spin-one field. The electromagnetic field is an example of the spin-one field.

There are fields in nature which are invariant under rotation by 180°. These are called tensor fields, with each point in space associated with a tensor. These tensor fields are called spin-two fields. The gravitational field is a tensor field and hence is a spin-two field. 

From QFT, every particle has an associated field and vice versa. So the particle corresponding to the scalar field will have zero spin, the one corresponding to vector field will have spin one, and the one corresponding to the tensor field will have spin two.

Thus the Higgs Boson is a spin-zero particle, photons spin-one, and gravitons spin-two. 

What we can notice here is that a spin-one field is invariant under rotation by 360°, spin-two under 180°, and so on. From the same pattern, it is clear that a spin-n field would be invariant under rotation by (360/n)°. Thus a spin-½ field should be invariant only under rotation by 720°. Such a field is called a spinor field, with every point in space associated with a spinor. A spinor is a “weird” type of vector which, upon rotation by 360°, gets reversed (picks up a minus sign). The electron field is a spinor field, invariant under 720° rotation, and thus the electron has spin one-half.
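This double-cover behaviour can be checked numerically with the standard spin-½ rotation operator R(θ) = cos(θ/2)·I − i·sin(θ/2)·σ_z (a sketch of textbook quantum mechanics, not code from the article):

```python
import numpy as np

# Spin-1/2 rotation about the z-axis by angle theta:
# R(theta) = cos(theta/2) I - i sin(theta/2) sigma_z.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rotation(theta):
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma_z

# A full 360-degree rotation does NOT return the spinor to itself:
print(np.allclose(rotation(2 * np.pi), -I2))   # True: the state picks up a minus sign
# Only after 720 degrees is the original operator recovered:
print(np.allclose(rotation(4 * np.pi), I2))    # True
```

The θ/2 in the rotation operator is exactly the (360/n)° pattern described above with n = ½.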

Spinor
A spinor.
Notice that upon single rotation of the cube, the whole system is inverted. The system gets back to its original state only upon completion of two rotations.
(GIF: Gfycat)

The Physical Meaning of Spin

Most people believe that the electron is actually spinning about some axis to generate the spin angular momentum.
This is entirely wrong, because the electron is considered a point particle, and for a point particle to generate a finite angular momentum, it would have to be spinning infinitely fast.
If you argue that the electron is a wave packet with some finite size, say the Compton wavelength, then too its surface would have to be spinning at a velocity greater than that of light. So the intuitive picture of a spinning electron is a wrong one and should be discarded.
But fields can have angular momentum, and any particle is an excitation of its underlying field. So a possibility is that the spin of a particle is, in fact, the angular momentum of the corresponding field: the angular momentum carried by the electron field could present itself to us as the spin of the electron, and so on.
But all of this is still only a hypothesis. There is no concrete picture of the physical meaning of spin, so it is no wonder that it is one of the most confusing aspects of quantum mechanics.
To conclude, it is best to quote Feynman,

“I think I can safely say that nobody understands quantum mechanics.
If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

A Special Constant That Governs the Existence of Our World

8

Our world is full of paradoxes: contradictions that challenge our notion of reality and understanding. As absurd as they may seem, the propositions are always found to be true.

By denying Principles, one may maintain any Paradox!

Galileo Galilei

The most famous paradoxes of all time include Schrödinger’s cat, dark matter, the quantum Zeno effect, and many more (not all familiar to me). But we are here to talk about the “fine structure constant”, denoted by the Greek letter α, a constant that has baffled scientists for more than a century. What intrigues physicists is not its origin but the value it seems to hold. The Great Explainer, a.k.a. Richard Feynman, called it “a magic number” and “one of the greatest damn mysteries of physics.”

Paradox and nature mysteries related to the constant
Solving paradoxes can be a key to understanding nature’s mysteries. [Image: slideshare]

The Bohr Model Failure

In the era of wave-particle duality, Louis de Broglie introduced a formula known as “De-Broglie Wavelength”.

λ = h/p, where p = mₑv

At the same time, Niels Bohr was constructing a simple model of the hydrogen atom. Using de Broglie’s hypothesis, he stated that the electron could only revolve in orbits whose circumference was an integral multiple of the de Broglie wavelength. In the Bohr model, a single e¯ revolving around the nucleus traces a circular orbit. Considering duality, the e¯ moves as a wave, similar to a standing wave on a fixed string, so the circumference covered by the e¯ must equal a whole number of wavelengths.

animation showing standing wave mechanics
Animation showing standing-wave mechanics: the length is an integral multiple of half-wavelengths, L = nλ/2 [Source: Physicsclassroom]

Circumference = 2πr ; 2πr = nλ ; 2πr = nh/p ; mₑvr = nħ

Doublet in Hydrogen atom absorption spectra
Fine spectrum of Hydrogen atom not explained by Bohr [Image: Wikipedia]
De-broglie model of atom
De-Broglie hypothesis used by Bohr in his model of hydrogen atom [Image: SlideShare]

Quantizing the angular momentum, Bohr was able to calculate the energy difference between different n levels and thus explain the origin of the different spectral lines of the hydrogen atom. The red Hα line, for example, was interpreted as a jump of the e¯ from n=3 to n=2. Later, it was discovered that the red line was a doublet, termed the ‘fine structure’ of the lines, an anomaly that could not be explained using the Bohr model.
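As a quick check of the Bohr picture, the Rydberg formula 1/λ = R(1/n₁² − 1/n₂²) reproduces the Hα line from the n=3 → n=2 jump. A minimal sketch (ours, using the standard CODATA Rydberg constant):

```python
# Bohr-model check of the H-alpha line: the n=3 -> n=2 transition.
R_inf = 1.0973731568e7      # Rydberg constant, m^-1

def wavelength_nm(n_lo, n_hi):
    inv_lambda = R_inf * (1.0 / n_lo ** 2 - 1.0 / n_hi ** 2)
    return 1e9 / inv_lambda  # metres -> nanometres

print(f"H-alpha: {wavelength_nm(2, 3):.1f} nm")   # ~656 nm, the famous red line
```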

The Original Origin

Arnold Sommerfeld thought he could improve the Bohr model to explain the anomaly. He proposed that orbits could be elliptical as well as circular, and introduced a quantum number k (any integer except 0) where n/k = length of major axis / length of minor axis. With an increase in the value of k, the ellipticity decreases, and the orbit becomes circular when n = k.

Sommerfeld model of atom
Sommerfeld Elliptical Orbits [Image: ChemistryOnlineGuru]

centripetal force diagram of electron
(Image: ZHydrogen)

Sommerfeld suggested that orbits are made of sub-energy levels, and k was called the “azimuthal quantum number”, which determines the orbital angular momentum. In this model, he also considered the relativistic variation of mass with speed. Summing up the whole process, the final expression for the total energy came out to be:

W(n,k) = −(Rhc/n²) [1 + (α²/n²)(n/k − 3/4)]   (interesting, isn’t it?!)

Sommerfeld found that the energy difference between the levels E(2,2) and E(2,1) explained the doublet anomaly. A new constant had appeared in the equation. To see where it comes from: when the e¯ revolves around the nucleus, the centripetal force acting on it is provided by Coulomb’s electrostatic force.

F꜀ = mₑv²/r = e²/(4πε₀r²) ; mₑvr = nħ ; K.E. = ½mₑv² = e²/(8πε₀r) ; P.E. = −e²/(4πε₀r)

So the total energy is En = P.E. + K.E. = −e²/(8πε₀r) …① Now, from mₑvr = nħ, v² = e²/(4πε₀mₑr) = (nħ/mₑr)².

From this, 1/r = e²mₑ/(4πε₀n²ħ²); substituting in ① we get

En = −½ (e²/(4πε₀nħ))² mₑ = −½ (e²/(4πε₀nħ))² mₑc²/c² = −½ (e²/(4πε₀nħc))² E₀

Re-writing, En = −α²E₀/(2n²), where α = e²/(4πε₀ħc)

Plugging in the MKS (SI) values of the fundamental constants, α ≈ 1/137, a dimensionless number.
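You can verify this yourself from the CODATA values of the fundamental constants. A minimal sketch:

```python
# Computing the fine structure constant alpha = e^2 / (4 pi eps0 hbar c)
# from CODATA values of the fundamental constants (SI units).
import math

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J s
c = 2.99792458e8           # speed of light, m/s

alpha = e ** 2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.6e},  1/alpha = {1 / alpha:.3f}")   # 1/alpha ~ 137.036
```

Note that α is a pure number: the units of charge, permittivity, action, and speed cancel exactly, which is why its value is the same in any unit system.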

The Alpha Effect

After Sommerfeld’s contribution, α appeared in many calculations, and its value remained the same irrespective of the unit system. It is also known as the “coupling constant”, as it determines the strength of electromagnetic interactions. With the Schrödinger model and the uncertainty principle, it became clear that atoms do not have fixed orbits, nor do e¯s have fixed speeds and energies. These discoveries changed the whole atom game, but α remained the same.

Since α determines the electronic energy levels in atoms, scientists were curious whether changing its value would change anything. In the 1950s, Fred Hoyle and others mapped out the detailed process by which stars produce heavy elements such as carbon and oxygen, and the formation of stars themselves. They observed that the abundance of carbon in the universe could only be accounted for if α had a value that favoured the fusion of helium nuclei into carbon over other elements.

Another study showed that if you change the value of α by as little as 4%, stars would not be able to sustain the nuclear reactions happening in their cores. Even Wolfgang Pauli was obsessed with this number, and he famously quipped,

When I die, my first question to the Devil will be: What is the meaning of the fine structure constant?

The Inconstant Constant Paradox

For a long time there has been speculation as to just how constant α is. Does its value change over space and time? One may say this contradicts the definition of a constant in the first place. In 1937, commenting on astronomer Arthur Eddington’s attempts to derive the constants from scratch, Paul Dirac asked, “How can we be so sure that constants have not evolved over cosmological time?”

In 2010, John Webb and his team reported that α had changed since the beginning of the universe. They collected data from the Keck Telescope showing absorption spectra of various quasars. He said, “Changing α, you change the degree of attraction between the e¯ and the nucleus”.

This changes the wavelengths absorbed by the e¯, affecting the absorption spectra; absorption spectra are thus a kind of barcode unique to the value of α. Analyzing the data, they found that α had increased by an average of 6 parts in a million (not quite the change you were expecting). This gave a possible hint that cosmological constants are not so constant after all.

Artistic creation of super massive blackhole
Artistic Representation of Super massive black hole Sagittarius A* [Image: ScientificAmerican]

Another study, conducted near the supermassive black hole at the very heart of our galaxy, Sagittarius A*, suggested otherwise. Researchers observed five stars that cruise around the SMBH and collected their absorption spectra. Fortunately for physics lovers, the constant showed no sign of variation even under such extreme gravitational conditions. Perhaps this is a number written by God itself, intrinsic to our very existence and to the creation of the Universe as we know it.

The biggest mysteries of Nature hide in the plainest sight, waiting to be unravelled.

RDX

Stellar Evolution Part 1: The Formation of New Stars

The beautiful night sky is filled with thousands of stars that we can see and millions that we cannot, with about 4800 stars being born every second. Stars have life cycles ranging from a few million years to more than 10 billion years, and the billions of stars in the Universe are each at a different point in their life cycle. In this topic, we will discuss how these stars are formed and how they die. The topic spans two articles: this one discusses how stars are born, and the next explains how stars die.

Formation of a New Star

Stage-1: Interstellar Cloud or Stellar Nurseries

These interstellar clouds, or stellar nurseries, are filled with dust particles and gases. They are dense and vast, sometimes spanning tens of parsecs (10¹⁴ to 10¹⁵ km) across. They contain about 10⁻⁴ to 10⁶ particles per cubic centimetre (cm³) and are composed of roughly 70% hydrogen gas, with most of the remainder being helium. The typical temperature of these clouds is about 10 K, and they have masses more than a thousand times that of the Sun (which is 1.989 × 10³⁰ kg). The dust these clouds contain is important, as it helps cool the cloud as it contracts and plays a role in star and planet formation.

The NASA/ESA Hubble Space Telescope captured this billowing cloud of cold interstellar gas and dust rising from a tempestuous stellar nursery located in the Carina Nebula, 7500 light-years away in the southern constellation of Carina.
The NASA/ESA Hubble Space Telescope captured this billowing cloud of cold interstellar gas and dust rising from a tempestuous stellar nursery located in the Carina Nebula, 7500 light-years away in the southern constellation of Carina. (Image: ESA/Hubble)

Now the cloud begins to collapse under its own gravity, and once it compresses past the point where gravity overcomes gas pressure, it is believed to fragment into small clumps of matter due to gravitational instabilities in the gas. As the fragments shrink, their average temperature at first stays close to that of the parent cloud, because the gas constantly radiates large amounts of energy into space. Eventually, the inner regions of the shrinking clumps become opaque to their own radiation and start to heat up.

These temperatures reach about 10,000 K. The gas near the edge of the shrinking cloud, however, is still radiating energy freely, so it stays cool. The density at the centre reaches about 10¹⁸ particles/m³, so it now begins to resemble a star. This dense, opaque region at the centre is called the protostar.
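The collapse timescale can be estimated with the standard free-fall formula t_ff = √(3π / 32Gρ). The sketch below is our own (assuming a pure-hydrogen clump at the upper quoted density of 10⁶ particles per cm³), and it lands in the tens of thousands of years, consistent with the protostar timeline described next:

```python
# Order-of-magnitude free-fall (collapse) time t_ff = sqrt(3*pi / (32*G*rho)).
# Assumption (ours): pure hydrogen at the upper quoted density of
# 10^6 particles per cm^3 (= 10^12 per m^3).
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.67e-27         # mass of a hydrogen atom, kg
n = 1e12               # number density, particles per m^3

rho = n * m_H                                    # mass density, kg/m^3
t_ff = math.sqrt(3 * math.pi / (32 * G * rho))   # seconds
years = t_ff / 3.154e7                           # seconds per year
print(f"free-fall time: {years:.0f} years")      # tens of thousands of years
```

Sparser regions of the cloud (lower n) give correspondingly longer collapse times, which is one reason star formation proceeds clump by clump rather than all at once.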

Stage-2: Protostar

A protostar is a very young, not yet fully formed star that is still gathering mass from its parent interstellar cloud. As it evolves, it shrinks, and its density and temperature increase, both in the core and at the photosphere (the outer shell of a star from which light radiates). After about 100,000 years, the temperature at the centre reaches about 1 million K (1,000,000 K). This leads to the start of proton-proton nuclear reactions that fuse hydrogen into helium.

HBC 1 is a young pre-main-sequence star or a protostar
HBC 1 is a young pre-main-sequence star or a protostar (Image: Wikipedia)

The luminosity of this protostar is about 1000 times that of the Sun (1 L⊙, or 3.846 × 10²⁶ W). Since full-scale nuclear burning has not yet begun in the core, this luminosity comes from the release of gravitational energy as the protostar continues to shrink. The protostar is not yet in equilibrium: even though its temperature is high enough that the outward pressure nearly counters gravity’s inward pull, the balance is not yet perfect.

Also, the internal heat of the protostar slowly diffuses out from the core to the cooler outer surface, the photosphere, where it radiates into space. As a result, the contraction gradually slows down, but is not stopped completely.

Hertzsprung-Russell diagrams or H-R diagrams
Hertzsprung-Russell diagrams or H-R diagrams (Image: Wikipedia)

As time passes, the protostar gradually approaches the main sequence, a continuous and distinctive band of stars that appears on plots of stellar colour versus brightness known as Hertzsprung-Russell (H-R) diagrams. Though the initial contraction and fragmentation of the cloud were rapid, the evolution slows down as the protostar nears being a full-fledged star (a proper newborn star). The contraction rate is set by the rate at which the protostar radiates its internal energy into space; as the contraction rate decreases, so does the luminosity of the protostar.

Stage-3: Newborn Stars

Now, when an object of one solar mass (1.989 × 10³⁰ kg) has shrunk to a radius of about 10⁶ km (1 million km), the contraction raises its central temperature to about 10,000,000 K (10 million K), giving it enough energy to ignite nuclear burning (the production, depending on the core temperature, of heavier and heavier elements). Protons then fuse into helium in the core, and a star is born. The newborn star’s surface temperature is about 4500 K.

Now, a star’s luminosity depends on both its size and its surface temperature, with temperature entering much more strongly: a cool surface means a dim star even if it is large. Once the star starts fusing hydrogen, it establishes a balance between inward gravity and outward pressure, called hydrostatic balance or hydrostatic equilibrium.
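The temperature-luminosity link is the Stefan-Boltzmann law, L = 4πR²σT⁴. Plugging in the newborn-star numbers quoted above gives a luminosity comparable to the Sun’s (a rough sketch of ours, not a figure from the article):

```python
# Stefan-Boltzmann luminosity L = 4 pi R^2 sigma T^4 for the newborn star
# described above: R ~ 10^6 km, surface temperature ~ 4500 K.
import math

sigma = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
R = 1e9                # radius: 10^6 km in metres
T = 4500.0             # surface temperature, K
L_sun = 3.846e26       # solar luminosity, W

L = 4 * math.pi * R ** 2 * sigma * T ** 4
print(f"luminosity: {L:.2e} W = {L / L_sun:.2f} L_sun")
```

The T⁴ dependence is why even modest temperature differences between stars translate into enormous differences in brightness on the H-R diagram.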

This illustration depicts a view of the night sky from a hypothetical planet within the youthful Milky Way galaxy 10 billion years ago. The heavens are ablaze with a firestorm of star birth; glowing pink clouds of hydrogen gas harbour countless newborn stars, and the bluish-white hue of young star clusters litter the landscape. (Image: ESA/Hubble)

Depending on its surface temperature and luminosity, the star may remain essentially unchanged for about the next 10 billion years; this long, stable phase is the main sequence. Our Sun itself is currently in the middle of its own main-sequence lifetime.

Tune in next week to learn more about how stars die (Click Here)

Water: Greatest or the weirdest liquid on Earth?

3

Water is transparent; it is in rain, snow, steam, oceans, rivers, and most places on Earth. Water is the life support for the entire living kingdom and the third most abundant molecule in the universe. Half of the population holds this view and terms water "boring". The other half of the world thinks of it as a magical liquid, invoked in homeopathy and in practices like water memory, polywater, and structured water. But water is turning out to be stranger than we could have ever imagined.

From steam to ice, water continues to mystify.

– Richard Saykally

Scientists and researchers have long observed the deceptive and complex nature of water molecules, and some astounding facets of water, unknown for ages, have been revealed. With the discovery of unusual forms of water, the secrets of icy planets like Neptune and Uranus can be better predicted. Water has at least 66 properties that differ from those of most liquids. Ice, the solid form of water, exists in different forms based on various arrangements of its constituents.

Molecular observations of water show that its molecule consists of 2 hydrogen atoms bonded to an oxygen atom, and this makes it a highly polar molecule, thanks to the large electronegativity difference between hydrogen and oxygen.

As a result, hydrogen bonds are formed and the molecules tend to stay close together; this is what makes water a cohesive, wetting liquid. Apart from these facts, water has some more unique qualities that are not so common among other liquids. Ice, the solid form of water, does not melt the way we thought it did: instead of melting all at once, it melts layer by layer, in a sort of continuous process. The top layer is just 45 nm thick, which measures 1/1000 the thickness of a human hair. A new study also shows that water can remain liquid even at −38 ℃. Below are more such revelations that make water a bizarre liquid:

Two different phases of water

Water exists in two distinct liquid phases (Image: UPI.com)

Researchers at Stockholm University, Sweden, discovered that the water we love and know exists in two different phases, not as a single liquid. The two phases are said to differ in density and structure, and what we see under normal conditions is a fluctuation between the two. Until now, only ice was known to exhibit different densities and structures; most people are not familiar with the fact that much of the ice in the universe is in its amorphous form, while the other, crystalline form has a more orderly arrangement of molecules.

These forms of ice tend to switch between high and low density, and this is what led scientists to suspect that the liquid form of water could show similar behavior.

The remarkable finding of the research was two different phases of water at low temperatures, where the crystallisation of water is slow. Two different types of X-rays were employed in the experiment to find the distance between the H2O molecules and track their movement during the transition from an amorphous, glassy, frozen liquid state to a viscous liquid, and then to another, even more viscous liquid with lower density.

Even the pioneer of X-rays, Wilhelm Röntgen, found evidence for two different phases of water and stated that fluctuations between the two can give rise to strange properties. These results give us an overall understanding of the behavior of water at different temperatures and pressures, as well as the effect of the salts and biomolecules that support life. In view of global climate change, the purification and desalination of water will be a major challenge.

Black and hot form of ice

Scientists have discovered a new form of ice using diamonds and some ultra-powerful lasers. Ice was known to exist in 17 different forms, but the 18th form has surprised scientists. Instead of being white and cold, this form is black and hot, as a computer model predicted back in 1988. The simulation suggested that a new form of ice occurs at temperatures as high as 2000 K and pressures above 100 GPa. Such extreme conditions rip the hydrogen atoms off their oxygen mates: the hydrogen atoms lose their electrons and become positively charged protons.

These protons flow through the cubic lattice packing of oxygen atoms like a liquid. A high concentration of freed protons can conduct electricity, possibly like arsenic or graphite, differing only in the charge carriers. Predicting such a superionic form of ice on an old computer is one thing, but observing it in practice is quite another. To create this form of ice, scientists compressed water molecules between diamonds to build up pressure and blasted them with six highly powerful lasers.

The lasers were fired in timed pulses to avoid exceeding the predicted melting point of this ice form. The laser pulses explosively vaporized the diamonds containing the water, creating a massive spike in temperature and pressure. To find out what happens during this peak, scientists shot 16 more laser beams at a piece of iron, which vaporized and sent X-rays through the water. Based on the X-ray diffraction data, scientists confirmed the formation of a cubic lattice arrangement of oxygen atoms, which matched the prediction of the 1988 computer model.

In this time-integrated photograph of an x-ray diffraction experiment, giant lasers focus on the water sample, which is sitting on the front plate of the diagnostic tool used to record diffraction patterns. Additional laser beams generate an x-ray flash off an iron foil, which allows the researchers to take a snapshot of the compressed and heated water layer.
PHOTOGRAPH BY MILLOT, COPPARI, KOWALUK (LLNL)

Ice XVIII has become of prime interest to planetary scientists, because icy giant planets like Uranus and Neptune have the conditions for ice XVIII to exist. It could explain odd phenomena that take place on these planets, like the shapes of their magnetic fields. Earth and some other planets have a magnetic field similar to that of a bar magnet, apparently due to conducting matter swirling inside the core. But these icy planets have lumpy magnetic fields with multiple poles, suggesting interiors made up of ice XVIII. However, a lot of research work is still needed to support this theory and to study other molecules, like the methane and ammonia present on these planets.

This is not all of the weirdness of water. Read more bizarre things about water on thehavok.com in further posts.

Why does matter occupy space?

3

We all might have heard that most of the space inside an atom is empty; to be precise, it is 99.99% empty. So, here is a question for you: why is the table in front of you solid? Why isn't it squishy? Why don't the atoms just sit on top of each other and occupy less space? Here is the answer to all such questions: it is because the electron, which is responsible for the size of the atom, is a "fermion". Now the questions arise: what is a fermion? What does it do? What does it have to do with any of this? We'll answer all of these questions in this article.

We'll talk a bit about quantum mechanics, elementary particles, and quantum field theory. We won't go deep into these concepts, but we will use some of their basic consequences. The quantum mechanical description of a particle is given by something called a 'wavefunction', represented by 𝛹(x). Now let's consider two identical particles, say two electrons. The wavefunction of the system, considering both particles together, will look like 𝛹(x1, x2).

A question might arise: what do you mean by 'identical'? QFT says that every elementary particle is associated with some kind of field; an electron, for example, is associated with the electron field and is nothing but a vibration or excitation of that field. So when we say that we have two electrons, we mean that both of them are vibrations of the same underlying electron field, one at position x1 and the other at x2. Both electrons therefore have to be identical.

So we take two identical particles (i.e. electrons) represented by a wavefunction 𝛹(x1, x2). Now, what happens if we interchange one with the other? Physically speaking, we would not see anything different; we would never be able to tell whether the electrons were interchanged or not. The physical reality of both situations is the same. In QM we talk in terms of probabilities, given by |𝛹|2. After interchanging, the wavefunction becomes 𝛹(x2, x1), and the same physical reality implies that the probability must be the same.

Therefore,  \left | \Psi (x_{1}, x_{2}) \right |^{2}=\left | \Psi (x_{2},x_{1}) \right |^{2}

This condition has two solutions: \Psi (x_{2},x_{1})=+\Psi (x_{1},x_{2}) and \Psi (x_{2},x_{1})=-\Psi (x_{1},x_{2}). Both of these solutions exist in nature; the particles corresponding to the symmetric (+) solution are called "BOSONS".

The particles that correspond to the solution \Psi (x_{2},x_{1})=-\Psi (x_{1},x_{2}) are called “FERMIONS”. Their wavefunction is of the form \Psi (x_{1},x_{2})=\Psi_{1} (x_{1})\Psi_{2} (x_{2})-\Psi_{2} (x_{1})\Psi_{1} (x_{2}). Here you can see that if \Psi_{1}=\Psi_{2}, then \Psi (x_{1},x_{2})=0. This means that we cannot have two identical fermions at the same place.

This is the "Pauli exclusion principle", which states that we cannot have two identical fermions in the same place and state.
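
The antisymmetry argument can be checked numerically; a minimal sketch with toy (made-up) one-particle states:

```python
import math

def antisym(psi_a, psi_b, x1, x2):
    """Unnormalised antisymmetric two-fermion wavefunction:
    Psi(x1, x2) = psi_a(x1)*psi_b(x2) - psi_b(x1)*psi_a(x2)."""
    return psi_a(x1) * psi_b(x2) - psi_b(x1) * psi_a(x2)

psi1 = lambda x: math.exp(-x**2)        # toy single-particle state
psi2 = lambda x: x * math.exp(-x**2)    # a different toy state

# Swapping the particles flips the sign: Psi(x2, x1) = -Psi(x1, x2)
print(antisym(psi1, psi2, 0.3, 0.7), antisym(psi1, psi2, 0.7, 0.3))

# Identical states: Psi vanishes everywhere -- the Pauli exclusion principle
print(antisym(psi1, psi1, 0.3, 0.7))   # 0.0
```

The last line is the exclusion principle in miniature: putting both particles into the same one-particle state makes the antisymmetric combination vanish identically.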

This is the actual definition of fermions and bosons. The other characterisation you might have heard is that fermions have half-integral spin (1/2, 3/2, ...) while bosons have integral spin (0, 1, 2, ...). This is also true, but it is a consequence (the spin-statistics theorem) rather than the definition.

We saw that fermions cannot pile on each other, but bosons can. Bosons do indeed pile on each other and form what we call "classical fields". E.g. photons form the electromagnetic field, gravitons the gravitational field, and so on.

This is the reason we could detect these classical fields a lot earlier, even before the theoretical formulation.

The fermions form the basic building blocks of matter. Fermions consist of 6 quarks and 6 leptons; the electron we know is one of the 6 leptons. These quarks and leptons make up all the matter around us.

So we can say that,

Bosons\rightarrowForces (classical fields)

Fermions\rightarrowMatter.

Now coming back to the initial question of why atoms occupy space and do not pile on each other: the reason is simply that they can't. Electrons of different atoms, or of the same atom, cannot occupy the same position and state, because that is prohibited by the Pauli exclusion principle. If we try to bring electrons too close, they repel each other by electrostatic repulsion and also by what we call "Fermi pressure". Atoms are not squishy because we simply do not have enough energy to squish them; the forces resisting such compression are enormous.

That is the reason the table in front of you is solid: when you press down on it with your hand, it exerts a Fermi pressure back, and you just can't squeeze it.

Read other articles related to Science only at thehavok.com

Molecules of the Year 2019: Exciting compounds that chemistry has got

Last year, researchers from all over the world fused and fabricated some of the most interesting and marvellous compounds in chemistry. These compounds can be used for the betterment of the whole world, and they prove that innovation in chemistry is being pursued at the highest level. Many physicists have said that chemistry is the messy part of physics, but they also admit that chemistry only looks messy to them when they invade it.

This is why chemistry can feel like a jigsaw puzzle, but when compounds and molecules of this kind are involved, people are fascinated. These are compounds which not only fascinate but also make chemistry interesting and innovative. With chemistry like this, the future is secured, and so is humankind.

Give a read to the most amazing and beautiful syntheses from the previous year's chemical innovations.

Unbelievable Chemistry: New allotrope of carbon accomplished

Allotropy is a broad concept in chemistry, and one still unsolved for many elements. It means that one element can exist in different forms based on different arrangements of its atoms. For example, carbon has allotropes like graphene, diamond, and charcoal.

newest carbon allotrope (Image: cen.acs.org)

Chemists started with a multi-ring precursor (left) and used voltage pulses to pick off carbon monoxide molecules to form intermediates (center left and right) on the way to making cyclo [18] carbon (right).

The picture above looks hazy because it was taken with an atomic force microscope. Researchers made cyclo[18]carbon, an 18-membered ring of carbon atoms joined together by alternating single and triple bonds.

“It’s both an allotrope and a molecule, which is why this synthesis is so sensational,” says Rik Tykwinski, a physical organic chemist at the University of Alberta

This new allotrope has high reactivity, readily forming covalent couplings between molecules. Covalent coupling is the formation of a single covalent bond, for instance between a polymeric surface and a biomolecule; small biomolecules can be immobilized using it. This technique opens new paths for the synthesis of further carbon allotropes and carbon-rich materials.

Anti-aromatic Ni(II) Norcorrole Nano-cage Synthesised:

For many decades now, molecular cages, hosts, and porous materials with nanometre-sized cavities have been reported. Such nano-cages are widely used in molecular recognition, separation, stabilization, and the promotion of unusual chemical reactions. They are also successful drug-delivery platforms due to their well-defined structures, and they are reliable because of properties like biocompatibility, biodegradability, and low toxicity.

The majority of these nano-cages or nano-spaces are surrounded by aromatic walls; a cage confined by anti-aromatic walls had not yet been synthesised, because of the instability of anti-aromatic compounds and their unknown effects on the properties of nano-spaces.

Despite knowing these obstacles, researchers still demonstrated the construction of an anti-aromatic-walled nano-cage: a self-assembled cage composed of four metal ions and six identical anti-aromatic walls.

This was made possible by preparing Ni(II) norcorrole building blocks and adding substituents and iron ions; under the right conditions, the molecules self-assembled into a tetrahedral cage.

antiaromatic nanocage

Molecules that land inside this nano-cage have their nuclear magnetic resonance (NMR) signals shifted downfield by 3 ppm (yellow) to 9 ppm (red), depending on their location. Blue sticks represent the anti-aromatic walls; gray represents substituents on the walls. Ni = green; Fe = red.

This cage showed unusual behavior beyond the NMR-spectroscopic frequency range, which can open new ways to further study the effects of an anti-aromatic environment on nano-spaces.

Longest, most twisted Perphenylacene (per-phenyl-acene) in chemistry synthesised:

Dodecaphenyltetracene is the largest perphenylacene prepared so far, and it was synthesized from known compounds in three steps.

dodeca-phenyl-tetracene
(image: cen.acs.org)

From the figure it can be seen that there are 4 fused benzene rings in the centre, surrounded by 12 pendant phenyl rings that make up the whole structure. Because of these surrounding rings, it becomes almost nonreactive, and it displays reversible electrochemical oxidation and reduction reactions.

This compound is of high interest for electronics and photovoltaics due to its significant chemical and physical properties.

Chemists cage methane inside C60 fullerene:

Fullerenes, or buckyballs, are the roundest molecules known in chemistry; they are allotropes of carbon. Fullerenes are versatile and have antiviral properties, because of which they have been explored for the treatment of HIV infection. Methane is the largest molecule, and the first organic one, to be encaged inside a buckyball.

methane caged inside C60
Methane caged inside C60 fullerene (Image: online Wiley library)

Researchers synthesised an opened fullerene cage bearing a sulfur-containing 17-membered ring, forced methane inside at high pressure, and closed the cage by oxidizing the sulfur; the leftover sulfur monoxide was ejected later. The whole synthesis was characterized with the help of mass spectrometry, NMR spectroscopy, and X-ray crystallography. Afterwards, there was evidence of methane rotating freely inside the C60 cage.

This synthesis opens the opportunity of encaging other small molecules such as NO, NH3, N2, CO2, CH3OH, and H2CO. With this, new possibilities in the fields of nanotechnology and nano-chemistry can flourish.

Chloride ion capturing Cryptand-Cage for chemistry:

cryptand

First of all, let's get familiar with cryptands. These are basically 3-D forms of crown ethers, with more selectivity and stronger binding. So, what are crown ethers? They are nothing but cyclic chemical compounds possessing a ring with multiple ether groups. The structure shown above is that of a cryptand.

crown ethers

Now, knowing this, we can move further. Synthetic receptors for biomolecular structures are usually associated with O-H or N-H hydrogen bonding, which gives them high selectivity and tight binding. Contrary to this, researchers developed a chloride-selective receptor in the form of a cryptand-like cage using only C-H hydrogen bonding, even though C-H groups are considered weak hydrogen donors and offer only weak hydrogen bonds.

cryptand cage
Cryptand cage which can capture chloride ion (Image: cen.acs.org)

The compound synthesised has a corrosion-inhibition property, meaning it can slow down the corrosion rate of a metal or alloy that comes in contact with a fluid. It also shows anti-Hofmeister salt extraction, an effect related, in simple terms, to the solubility of proteins. (Click here to know more.)

Catenanes- The interlocked rings of benzene

Nanotechnology, and especially nano-carbons, are a boon to the field of science and technology. Researchers in Japan have developed these dynamic topological molecular nano-carbons, stimulating their deeper understanding and applications. They have composed catenane and trefoil structures exclusively from carbon and hydrogen.

catenanes of chemistry
catenanes: The interlocked rings of Benzene (Image: cen.acs.org)

They linked phenyl rings end to end into macro-cycles that met at silicon centers in the middle. Once this was achieved, they removed the silicon with fluoride, and the final products were obtained. These catenanes seem rigid in nature, but it was observed that the all-benzene ring structures possess rapid vortex-like motion even at −95 °C. This was interesting dynamic behavior from such a molecular compound, and it has extended the deeper study of such topological molecular nanocarbons.


This was all about the coolest chemistry witnessed last year. Putting up all of the research work here was not feasible, so mention the one among the exciting compounds above that you want to know more about in the comment section below. With your ideas, we will surely try to cover the compound of your interest.

Do comment for your favourite.

with such chemistry, achievements become frenzy, require hard work of many

and indeed trails whole science with excellency.

–OSD

Quantum Zeno Effect – A Watched Pot Never Boils

5

Quantum physics is a jungle of paradoxes. Any direction you go, you end up with a paradox waiting to pounce on you, its claws ready to slash through your common sense, making you wonder what is actually real, and ridiculing your inability to comprehend it. Most of these paradoxes are not really paradoxes; they just showcase our inability to perceive quantum physics. Be it the Schrödinger's cat paradox or any other, ultimately they have an explanation. Generally we overlook and ignore it, until one day we finally realize that our perception of it was completely wrong.

The quantum Zeno effect is a very interesting consequence of the mysterious quantum world. Before diving deeper into the effect, have a look at its classical counterpart.

The Zeno Paradox:

Zeno of Elea was a Greek philosopher. He posed paradoxes that went against common sense and made one question the reality of one's perception. One of his famous paradoxes is the arrow paradox:

If everything, when it occupies an equal space, is at rest at that instant of time, and if that which is in locomotion is always occupying such a space at any moment, the flying arrow is therefore motionless at that instant of time and at the next instant of time but if both instants of time are taken as the same instant or continuous instant of time then it is in motion.
— as recounted by Aristotle, Physics VI:9, 239b5, Wikipedia.

(Image: shutterstock)

Zeno states that for motion to occur, an object must change the position it occupies. But for an arrow in flight, at a given instant of time, it cannot be moving at all, because no time is elapsing within that instant; it must occupy one fixed position. Thus, if everything is motionless at any given instant, and if time is composed of many instants, then no motion is possible.

In short, he states in his paradox that a flying arrow that is being observed can never fly.

This, even though counterintuitive, is only a vague attempt to crack open our common sense. Ultimately it is just an argument, not a firm physical statement.

The Quantum Zeno Effect

This has inspired a beautiful effect in quantum physics, the quantum Zeno effect: an observed water pot in the quantum world never boils. Or, to put it better, a quantum system cannot undergo any changes while it is being observed.

This is not a paradoxical statement like the Zeno paradox; it is an observed and proven quantum effect. But to understand it, you first need to know what is meant by observation in the quantum world, and how states evolve with time.

Quantum states and their evolution:

Consider a quantum system which, upon being observed or measured, gives one of two possible values, A or B. The state of the system is represented by |𝜓〉. Let us say that the system is in state |A〉 if the measurement gives the value A, and in state |B〉 if the measurement gives the value B.

Quantum superposition, which is the heart of quantum mechanics, states that at any time the system is in a superposition of all possible states. Mathematically it can be written as |𝜓〉= C1|A〉+ C2|B〉, where C1 and C2 are two complex numbers, subject to the normalisation condition |C1|² + |C2|² = 1.

When the system in the state |𝜓〉 is measured, the wavefunction collapses: the superposition reduces to one of the pure states |A〉 or |B〉. Thus, upon measurement, the system goes to state |A〉 with probability |C1|² and to state |B〉 with probability |C2|².
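
This collapse rule (the Born rule) is easy to simulate; a minimal sketch assuming the hypothetical real coefficients C1 = 0.6 and C2 = 0.8, chosen so that |C1|² + |C2|² = 1:

```python
import random

def measure(c1, c2):
    """Collapse C1|A> + C2|B> to 'A' or 'B' with Born-rule probabilities."""
    assert abs(abs(c1)**2 + abs(c2)**2 - 1.0) < 1e-9   # normalisation check
    return 'A' if random.random() < abs(c1)**2 else 'B'

random.seed(0)
counts = {'A': 0, 'B': 0}
for _ in range(10_000):
    counts[measure(0.6, 0.8)] += 1
print(counts)   # roughly 36% 'A' and 64% 'B'
```

Over many repeated measurements the outcome frequencies approach |C1|² = 0.36 and |C2|² = 0.64, which is all the probabilities in the paragraph above mean operationally.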

If you start with the state |𝜓〉= |A〉, the energy content of the system drives it towards superposition. So at a later time t, the system will be in the state |𝜓〉= C1(t)|A〉+ C2(t)|B〉. This is how systems evolve in quantum mechanics.

The Zeno Effect

(image: hipwallpaper)

Now coming to our Zeno effect, let's take a radioactive atom. This atom can be in either of two possible states: decayed or undecayed. Let's represent the decayed state by |D〉 and the undecayed state by |U〉. Once we observe the system, we see that the atom is either undecayed (in state |U〉) or decayed (in state |D〉).

Initially, we take an undecayed atom: the atom is in state |𝜓〉= |U〉. Since the atom has some energy, it evolves continuously, ending up in the superposition state |𝜓〉= C1(t)|U〉+ C2(t)|D〉. Thus a measurement at a later time will give a decayed atom with probability |C2(t)|² and an undecayed atom with probability |C1(t)|².

Suppose the radioactive atom was initially in the undecayed state, i.e. C1(t=0) = 1 and C2(t=0) = 0. Now if we make a measurement on it after a very short interval of time t, the system is most likely to be found in the same undecayed state. This is because a very short period of time has elapsed and the system has not evolved much, meaning C1(t) ≈ 1 and C2(t) ≈ 0.

So each measurement you make takes the atom back to its undecayed state, restarting the whole process. This means that continuous measurement at small intervals of time will restrict any change in the system. Even a radioactive atom with a mean life of 5 seconds can be prevented from decaying for as long as we want: minutes, hours, days, or even years.
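
This freezing can be made quantitative. For short times, the survival probability of the undecayed state falls off quadratically, P(t) ≈ 1 − (t/τz)², where τz is a characteristic "Zeno time" (the value below is hypothetical). Splitting a fixed observation window into more and more measurements then drives the total survival probability towards 1:

```python
def survival(total_time, n_measurements, tau_z):
    """Probability the atom is still undecayed after n equally spaced
    measurements, each resetting the short-time law P(dt) = 1 - (dt/tau_z)^2."""
    dt = total_time / n_measurements
    return (1.0 - (dt / tau_z) ** 2) ** n_measurements

TAU_Z = 1.0   # hypothetical Zeno time, in seconds
for n in (1, 10, 100, 1000):
    print(n, survival(0.5, n, TAU_Z))   # approaches 1 as n grows
```

With a single measurement after 0.5 s the atom survives with probability 0.75; with a thousand measurements in the same window, the survival probability exceeds 0.999.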

Zeno effect in a nutshell

So indeed, in the quantum world, an observed water pot never boils. A system can never undergo any changes while it is being observed continuously at short intervals of time. Even though it seems totally weird and wrong, it is indeed a physical effect that has been proven experimentally. But the effect requires distinguishable quantum states, which is not the case in the classical world, where all states are continuous; thus the effect is not observable classically.

Read other articles related to Science only at thehavok.com

Graphene: The Most Promising Super-Conductor Contender

3

A state of frenzy arose in the scientific world with the discovery of graphene in 2004. Graphene, popularly known as "The Wonder Material", held promising potential owing to the exotic properties it displays. Graphene is a single layer of carbon atoms arranged in a hexagonal honeycomb pattern.

Graphene can theoretically do anything, but move out of the lab.

Anonymous

Using graphene, researchers are now one step closer to uncovering the mysteries surrounding strongly correlated materials. These are a wide class of heavy-fermion compounds that show exceptional electronic and magnetic properties, including metal-insulator transitions, half-metallicity, and spin-charge separation, which are explained by the behavior of e¯s or spinons. So far, ultra-cold atom lattices were used to simulate quantum materials.

The Exotic Phenomenon

Twisted Bi-layer Graphene
Twisted Bi-Layer Graphene with Moire pattern. (Image: ScienceAlert)

In 2018, physicists Pablo Jarillo-Herrero and Yuan Cao discovered an exceptional phenomenon that involves our two favorite words: graphene and superconductor. The team stacked two layers of graphene at a slightly offset angle, and the 'twisted graphene' became either an insulator or a superconductor. 1.1° was among the first of the angles at which these exotic properties were observed and was coined the "Magic Angle", while the system was coined "Magic-Angle Twisted Bi-Layer Graphene" (MA-TBG). Not only this, but the research led to the birth of a new field called "Twistronics".

About Mott Insulators

We all know that "Band Theory" explains the concept of conduction through conduction and valence bands. In a metal these bands overlap, while in an insulator they are separated by a gap called the "Band-Gap". The bands' density of states (DOS) tells us the number of quantum states that an e¯ inside a solid can take. The main twist occurs when a metal has an exactly half-filled band (HFB): you would expect to see metallic properties, but instead it behaves like an insulator. These insulators, which should behave as metals but don't, are called "Mott-Insulators".

When a metal has an exactly half-filled band, the e¯ should be able to move from one atom to another, but Coulomb repulsion stops them from doing so, giving rise to a Mott insulator. These insulators are parent compounds of high-TC superconductors.

Unconventional Superconductivity

Unconventional superconductors are materials whose superconductivity cannot be explained by BCS theory or Nikolay Bogolyubov's formulation (or their extensions). MA-TBG falls into this category. In BCS theory, pairs of e¯ called "Cooper pairs" bind together in a metal at low temperatures (just above T0). These Cooper pairs have energy lower than the Fermi energy.

Here T0 refers to absolute zero temperature.

Fermi-Energy

Fermi Gas Model
Fermi Energy Model of hypothetical Fermi Gas. [Image: Fermi Gas Model]

When a metal is at T0, the wavefunctions of the valence e¯s of all its atoms overlap each other. The e¯s cannot escape the metal, so they are trapped in a finite potential well. This overlapped region is called a "Fermi Gas", and since we treat it as a gas, the e¯s can be treated as particles of an ideal gas. The classical approach tells us that at T0 all the e¯s should have Ek = 0.

So they would all have to occupy the same quantum state at T0, but Pauli's exclusion principle forbids this (no two e¯ can have an identical set of quantum numbers, ever!). As the temperature decreases to T0, all the lowest possible energy states are filled, each level containing a maximum of 2 e¯. The energy of the last filled state is termed the "Fermi Energy".
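
This filling rule can be sketched in a few lines: pour electrons into the lowest single-particle levels two at a time, and the last filled level is the Fermi energy (the level spectrum below is a toy 1-D box, E_n ∝ n², not a real metal):

```python
def fermi_energy(n_electrons, levels):
    """Fill levels from the bottom, two electrons (opposite spins) per level,
    and return the energy of the highest filled level."""
    remaining = n_electrons
    highest = None
    for energy in sorted(levels):
        if remaining <= 0:
            break
        highest = energy
        remaining -= 2   # Pauli: at most 2 electrons per spatial level
    return highest

levels = [n**2 for n in range(1, 20)]   # toy 1-D box spectrum
print(fermi_energy(10, levels))   # 10 electrons fill 5 levels -> E_F = 25
```

Even at absolute zero the topmost electrons sit at a high energy, purely because the exclusion principle forbids them all from dropping to the ground level.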

The Dirac Cone

Another unique property that graphene possesses is the Dirac cone (it has at least a ton of unique properties). This is not rocket science; it's easy to understand. If the energy of e¯ is expressed as a function of their momentum, we get parabolas for a metal (partially filled) and for an insulator (bottom parabola filled, with a band gap). But graphene has got something up its sleeve this time too.

Dirac Cones of Metal,insulator and graphene
Dirac Cones of Metal(half filled upper parabola), insulator(empty upper parabola with band gap) and Graphene(empty upper cone with a singularity) [Image: YouTube]

Starting with Kinetic Energy equation and modifying it:

E = \frac{1}{2}mv^{2} = \frac{(mv)^{2}}{2m} = \frac{p^{2}}{2m}; which shows that energy and momentum have a quadratic relation and form a parabolic graph. Surprisingly, graphene forms a cone instead of a parabola.

E = pv_{F}; suggesting that the e¯ move at a constant velocity v_{F}, just like photons, and behave like massless particles.
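
The two dispersion relations can be compared numerically; a minimal sketch using the free-electron mass and the commonly quoted graphene Fermi velocity of about 10⁶ m/s (the momentum values are arbitrary):

```python
M_E = 9.109e-31   # electron mass, kg
V_F = 1.0e6       # graphene Fermi velocity, m/s (approximate)

def E_parabolic(p):
    """Ordinary metal or insulator: E = p^2 / 2m (quadratic in momentum)."""
    return p**2 / (2 * M_E)

def E_linear(p):
    """Graphene near a Dirac point: E = v_F * p (linear, photon-like)."""
    return V_F * p

# Doubling the momentum quadruples the parabolic energy
# but only doubles the linear one
for p in (1e-26, 2e-26):
    print(p, E_parabolic(p), E_linear(p))
```

The quadratic curve gives the familiar parabola; the linear one, rotated about the energy axis, gives exactly the cone described above.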

3-D Dirac cone structure of graphene
(a) Band Structure of Graphene showing Dirac points for conduction (b) Dirac Cones residing on MBZ. [Image: Research Gate]

The picture above depicts the 3-D band structure of TBG. The points of contact of the two bands, where the Dirac cones sit, are called Dirac points.

The Real Physics At Play

Now that we are up to speed with the terminology, we shall explore the phenomenon at hand. In the BLG system, a Moiré pattern develops when the layers are twisted, just like putting two thin sheets of patterned plastic on one another and rotating the top layer while keeping the bottom fixed. This pattern forms small zones called mini-Brillouin zones, and each corner of the Brillouin zone has a Dirac cone attached to it. When the system is twisted, these cones come closer and interact with each other; near the magic angle, the bands become flat!

Moire induced pattern and dirac cone overlap
a) Moire Induced Pattern is an interference pattern formed in TBG. b) Formation of Brillouin zones just like the unit cell to identify Bravais lattices. c) Overlapping Dirac cones of individual layers. d) Formation of Inter-Layer Coupling in Overlapped region. e) Flat bands were obtained at magic angle.
[Image: Crommie Research Group]
Overlapping of dirac cones
Band Structure of MBZ in TBG evolving from Θ=3.0° to Θ=0.8°. The Inter-Coupling layer becomes flat at Θ=1.1°. [Image: MIT]

As a consequence, the Fermi velocity drops to nearly zero, and so does the Fermi energy at the charge neutrality point. The main catch here is the HFB, which stops the system from being a superconductor. At about 4 K the system transitions from a metal to an insulator (a very unusual transition).

Furthermore, when you apply a magnetic field (parallel or perpendicular), the system transitions from an insulator back to a metal. At the microscopic level in the HFB system, the e¯s exist in the form of spin singlets. The applied magnetic field polarizes these singlets, leading to Zeeman splitting, which closes the gap and allows conduction to start.
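The energy scale of that Zeeman splitting is easy to estimate. Assuming a spin-1/2 electron with the free-electron g-factor of 2 (a textbook approximation, not a value reported in the article), E_Z = g·μ_B·B:

```python
MU_B = 5.788e-5   # Bohr magneton, eV/T
G_FACTOR = 2.0    # free-electron g-factor (assumed)

def zeeman_splitting(b_tesla):
    """Zeeman energy E_Z = g * mu_B * B for a spin-1/2 electron, in eV."""
    return G_FACTOR * MU_B * b_tesla

# 0.4 T is the perpendicular field quoted in the measurement caption.
e_z = zeeman_splitting(0.4)
print(f"Zeeman splitting at 0.4 T: {e_z * 1e6:.1f} micro-eV")
```

Even a sub-tesla field yields a splitting of tens of micro-eV, comparable to the tiny gaps of the flat-band system, which is why a modest field is enough to restore conduction.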

Conductance as a function of charge density
a) Conductance as a function of charge density measured for device M1 (MA = 1.16°) in zero field (red trace) and a perpendicular field of 0.4 T (blue trace). The trace shows V-shaped conductance near n = 0 as well as insulating states near both ends. Near a filling of -2 e¯ per unit cell there is a considerable enhancement in conductance, indicating superconductivity.
b) Four-probe resistance measured at the densities bounded by the pink region; two SC domes are clearly visible next to the Mott insulating state. The remaining region is labeled as metal. Highest TC = 0.5 K. c) A similar plot for device M2 (MA = 1.05°), with the highest TC = 1.7 K. [Image: arXiv.org]

Observing the graph of conductance vs charge density, an enhancement in conductance was detected near -2 e¯, signaling the onset of superconductivity.

World Of Tunable Physics

The behavior of MA-TBG is beyond the explanation of weak-coupling BCS theory, indicating strong electron correlations. Superconductivity appears at a critical temperature TC of up to 1.7 K, which is remarkably high given the system's tiny carrier density. Moreover, the whole system can be controlled by applying a gate voltage or pressure and by tuning the DOS.

This experiment opens a new platform for highly tunable superconductors and, hopefully, we will be able to use the technology in the future. On the bright side, graphene is already being used to create a new class of flexible electronics, desalination films, and much more.

Sometimes, the simplest of the materials exhibit the most extraordinary properties.

RDX

Carbon dioxide: Threat to life or a frenzied path towards sustainability

Lithium-Carbon Dioxide Battery

First Fully Rechargeable Carbon Dioxide Battery is Seven Times More Efficient Than Lithium-Ion

Lithium-ion batteries have found their use in multiple domains. From electric vehicles to mobiles, laptops, and other handheld devices, everything runs on lithium-ion batteries, and what makes them so ubiquitous is their energy density and recharge cycle life. Energy science is a domain in which thousands of people are engaged in finding devices capable of meeting our ever-increasing energy needs.

Researchers at the University of Illinois at Chicago (UIC) have tested a lithium-carbon dioxide battery prototype that can be recharged completely and run efficiently even after 500 consecutive charge-discharge cycles. But the story is not just about the number of recharging cycles; it is about energy density: the amount of energy that can be stored in a compact shell and delivered efficiently when needed. The energy density of this battery is seven times that of current batteries on the market.

A typical battery used in electronic devices has an energy density of about 256 Wh/kg, but this lithium-carbon dioxide battery can theoretically achieve a whopping 1876 Wh/kg. This suggests that these batteries could be about 7 times lighter while holding the same amount of energy as traditional ones. Electric airplanes could adopt lithium-carbon dioxide batteries, while traditional batteries remain a good fit for electric cars and other electronic devices.
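The "seven times" headline follows directly from the two figures quoted above; a quick back-of-the-envelope check, using the article's numbers and a hypothetical 10 kg pack for illustration:

```python
LI_ION_WH_PER_KG = 256.0    # typical Li-ion energy density quoted above
LI_CO2_WH_PER_KG = 1876.0   # theoretical Li-CO2 energy density quoted above

ratio = LI_CO2_WH_PER_KG / LI_ION_WH_PER_KG
print(f"Li-CO2 stores about {ratio:.1f}x more energy per kilogram")

# Equivalently: the same energy in roughly one-seventh of the mass.
pack_kg = 10.0  # hypothetical Li-ion pack mass
print(f"A {pack_kg:.0f} kg Li-ion pack could shrink to ~{pack_kg / ratio:.1f} kg")
```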

This technology is already being researched by multiple institutes and laboratories, but what stopped them from reaching a final prototype was the decomposition of the battery's constituent ingredients, which hampered the charge-discharge process. In technical terms, when the battery discharges it produces lithium carbonate and carbon; recharging recycles the lithium carbonate, but the carbon deposits onto the catalyst, eventually damaging the battery.
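The discharge chemistry commonly cited for Li-CO2 cells is 4 Li + 3 CO2 → 2 Li2CO3 + C (my addition; the article does not write the equation out). A small sketch verifying that the atoms balance on both sides:

```python
from collections import Counter

def atoms(terms):
    """Total atom counts for a list of (coefficient, {element: count}) terms."""
    total = Counter()
    for coeff, composition in terms:
        for element, n in composition.items():
            total[element] += coeff * n
    return total

# 4 Li + 3 CO2 -> 2 Li2CO3 + C
reactants = [(4, {"Li": 1}), (3, {"C": 1, "O": 2})]
products = [(2, {"Li": 2, "C": 1, "O": 3}), (1, {"C": 1})]

assert atoms(reactants) == atoms(products)  # Li: 4, C: 3, O: 6 on each side
print("balanced:", dict(atoms(reactants)))
```

The lone C on the product side is exactly the solid carbon that, without the UIC group's catalyst-electrolyte combination, accumulates on the cathode and kills the cell.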

A group of researchers from UIC used new materials in the experimental carbon dioxide battery to make this recycling more efficient. Molybdenum disulfide was used as the cathode catalyst in combination with a hybrid electrolyte that incorporates the carbon into the cyclic process. The problem of carbon depositing on the catalyst is solved by this combination of materials, which forms multi-component composites.

In the world of science, computer modelling always plays a critical role in analysing results and reducing the effort needed to reach a solid solution. A modelling facility at Argonne National Laboratory (ANL) was used to derive the mechanism of the battery's reversible process.

Though these findings are partly theoretical, the commercial viability of the lithium-carbon dioxide battery may not be far away.

Living bricks that inhale carbon dioxide

What do you get when sand, bacteria, and sunlight are mixed together? A brick that self-replicates and pulls CO2 directly out of the atmosphere. Synechococcus, a bacterium that generates energy through photosynthesis rather than by feeding on other living things, is combined with sand and gelatin and then placed in nutrient-rich water. Once photosynthesis begins, calcium carbonate, glucose, and oxygen are produced. Calcium carbonate is a compound that makes up nearly 4% of the earth's crust and is found as marble, limestone, and chalk. It is also an ingredient of cement and other building materials.

This research has the potential to replace the traditional, energy-intensive process that involves extracting raw materials like clay and limestone and firing them at temperatures up to 1000°C. Because of this energy consumption, cement production is responsible for about 7-8% of global CO2 emissions.

Researchers around the world have been looking for lower-energy, lower-carbon building materials. Scientists at the Massachusetts Institute of Technology (MIT) devised a new method for producing cement using an electrolytic reaction instead of a furnace. German researchers, in 2010, developed a binding material for cement that is energy efficient and low in carbon emissions.

The Life Cycle of Living Building Materials
(Image: Matter)

The team working with the Synechococcus bacteria claims to have found a long-term solution. Sand and gelatin form a rigid framework that supports the multiplication of the bacteria. As an experiment, a brick was cleaved into two parts and placed in a saline solution; after a span of a few days, the bacteria multiplied and each piece eventually grew into a full brick. These observations were groundbreaking, but certain limitations have kept this material out of the markets.

The bacteria could thrive only under specific humidity and moisture conditions, and the toughness of the material is questionable, strength being the most important property for any material used as a building component. A scientific journal wrote: “Compared with a similar material that contained no cyanobacteria, the living version was 15% tougher in terms of resisting fractures. But it fell short of the resilience of standard bricks or cement, performing more like low-strength cement or hardened mortar.”

Despite some drawbacks, this fusion of biochemistry and biotechnology could find application in remote places where resources, manpower, and money are not easily available.

Besides these applications, carbon dioxide has been a part of numerous research efforts, and with these in view, the world of the future is expected to see major advances in energy storage solutions.

Read the previous blog Carbon dioxide: Can a greenhouse gas meet energy need