
Semiconductor Legislation: The World and The Supremacy Battle Between The U.S. and China


Ever since Gen-Z came of age, semiconductors have been upgrading, updating and emerging in every aspect of life. AI, automobiles, robotics and every other ultra-modern advancement has semiconductors fitted inside. And with silicon being the heart of every chip, demand has surged insanely all over the globe. With a limited number of manufacturers and considerably lower production due to the coronavirus pandemic, this high demand generated a huge chip shortage.

World wide semiconductor revenues Year to Year
Global Semiconductor Chip Shortage getting worse (Source: TechSpot)

But the interesting part strikes here. Compared to 2019, global semiconductor sales grew 6.5% in 2020, to nearly $439 billion. So, what exactly caused this demand-supply mismatch? What is the US planning to do about it that could affect the entire semiconductor industry? Is China playing a role in this situation?

Prior to answering and analyzing all this, let me cover a few segments that lead us to understand the international politics and economy.


The Corona Impact: It is not what it is…

Many scenarios existed in the industry after COVID-19 hit; the three major ones are:

PCs and Laptops: This segment would see a steep decline in demand, and the performance gap will widen over time. You must be thinking that, since all of us need PCs for remote work, demand should increase as sales increase. But because most people bought their electronics in 2020, demand is projected to decrease steeply over the next 5 years.

Semiconductor revenues by Segments
A3 scenario means GDP recovers in 4th Quarter of 2020 (Source: McKinsey & Company)

Similarly, the automotive industry relies majorly on government policies and incentives, and growth won’t be more than 1-5% as of 2021, which creates a scenario of lower demand. And lastly, there is the wired communications sector, which may or may not see a demand fall, depending upon how home-schooling and remote work tend to grow or decline.

Simultaneously, the companies which produced automotive chips were idle during the pandemic, and this resulted in 1.28 million fewer vehicles being manufactured due to the global chip shortage. So, the growth in the semiconductor market seen in the figures above largely came from the PC and laptop industry. Yes, Intel alone accounted for nearly $70 billion of chip supply in 2020. Other major contributions were from Qualcomm, NVIDIA and Samsung.


Disruption of Automotive Industry

Lack of R&D and insufficient resources for production led to the disruption of the automobile industry. Even the US faced drastic effects of this, and its biggest competitor, China, became the semiconductor superpower at one point. Economically, China was eating into US income: the US stood to lose $83 billion in income generation as China grew strong in the semiconductor market.

China, in its 14th five-year plan, explicitly outlined the goal of achieving “complete self-sufficiency” in semiconductors, all backed by state investments and discriminatory industrial policies. And the US was angry about an 18-percentage-point drop in global market share from its long-standing global leadership position. The situation was intense, politics was heating up, and the US needed to take action.

The Semiconductor Legislation and Biden Government

“We’re working on that. (Senate Majority Leader) Chuck Schumer and, I think, (Senate Republican Leader) Mitch McConnell are about to introduce a bill along those lines,” Biden said during remarks about his own plan to boost the nation’s infrastructure. The President then held a meeting with CEOs of top-notch companies, including General Motors Co and Ford Motor Co, along with White House officials Brian Deese and Jake Sullivan. The meeting was held virtually with an agenda to discuss the infrastructure bill and the semiconductor shortage.

Addressing Google/Alphabet, GM, and Intel, Biden said that the US must build its own infrastructure to prevent future supply crises. The White House said in a readout of the meeting that participants had “discussed the importance of encouraging additional semiconductor manufacturing capacity in the United States to make sure we never again face shortages.”

The Semiconductor Legislation and Biden Govt.
The Semiconductor Legislation (Source: mint)

The event came on the heels of Biden’s February 24 executive order calling for a 100-day review across federal agencies on semiconductors and three other key items: pharmaceuticals, critical minerals and large capacity batteries. Policymakers are focused on building additional semiconductor capacity in the United States, but experts say there are limited remedies in the near-term. Biden also pointed to the semiconductor shortage as he seeks to build a case for a $2 trillion infrastructure package.

Fighting China: Its International Politics

While Biden introduced these packages, the ‘China Challenge’ was introduced. This focused on leveraging smart, multilateral and well-tailored policies which can lead the US to compete even more strongly on the global stage. Incentivizing manufacturing of advanced semiconductors in the U.S. remains a key pillar of any such strategy. “I’ve been saying for some time now, China and the rest of the world is not waiting. And there’s no reason why Americans should wait,” the president emphasized.

Fighting China for Semiconductor War
Fighting China (Source: World Finance)

And prior to this meeting, a $50 billion infrastructure plan for a new Commerce Department office supporting production of critical goods had already been handed out by the White House. Side by side, backed congressional legislation to invest another $50 billion in semiconductor research and manufacturing was rolled out.


Semiconductor War: Impact on India

India has always been on the sidelines in such world political matters, especially when technological giants are involved. This impacts India economically and intellectually. The prices of electronic products always increase after such global wars. Trade becomes hard, imports of goods increase, and the economy is affected one way or another. Due to the lack of semiconductor R&D in India, skilled and experienced intellectuals tend to leave the country for better opportunities. All this, in turn, leaves a country like India decades behind the techno-superpowers of the world.

The Semiconductor Chip War
The Chip War (Source: Foreign Policy)
Though the big question prevails here: if it can impact a semi-powerful country like India in this way, what can it do to even smaller GDPs? The weapons war is over. A biological war is predicted. And the techno-war is being fought. Think!


Rare Earth Elements: Insight into China’s Atrocious Strategies


We are in the midst of a changing world order, but this change is not being induced by bullets and weapons; a tiny yet horrifying virus can be much more lethal, and at this point we have a clear picture of it. Though this article is not about the virus from Wuhan; it is about a battle that has been brewing below the radar: the battle for the secret ingredient that will power our future. Call it the new gold, the new oil or the new strategic fuel; it’s about the rare earth elements.


What are Rare Earth Elements? How do We Describe Them?

Movie directors would describe them as all-powerful elements, metals, perhaps, with supernatural characteristics like Mithril from The Lord of the Rings: light as a feather and hard as diamond, but found only in the mines of Moria.

Or Unobtainium, a metal that sells for $20 million a kilo in the well-known movie Avatar, a precious mineral found on the distant world of Pandora.

Imagine yourself living in Wakanda, home of the Black Panther: how would you describe ‘Vibranium’? The movie engaged its audience by portraying a rare earth element named Vibranium which can absorb any kind of kinetic energy.

The world outside the movies has a different story of the rare earth elements. We live on planet Earth with a virus on the prowl, and there are no such supernatural elements. Rare earth elements do not possess supernatural characteristics, but they already rule our lives. Everything that we surround ourselves with has got rare earths in variable amounts; from smartphones to supersonic jets, rare earths are the secret ingredient of almost every scientific wonder. And one country that has already been embroiled in multiple controversies is sitting on a pile of these metals.

Periodic Table highlighting Rare Earth Elements.
Rare Earth Elements lie isolated in the periodic table but hold very high significance in the manufacturing industry. Credit: RSC

Do you remember the periodic table, known for not being friendly with most students unless they have a special interest in chemistry? Taking a glance at it, we observe a couple of rows that appear to have been kicked out of the table, seemingly not fitting in and sitting ignored. Fifteen of these elements are called the lanthanides, the chemical elements tucked into the sixth row of the periodic table. Take the lanthanides along with scandium and yttrium and you get what the world knows as the rare earth elements. Well, why are they rare and at the same time so precious?

It’s because they don’t appear in big deposits or big mines like other elements do. They are much more evenly spread across the planet, which makes mining and refining difficult. So despite their overall abundance, concentrated deposits of rare earth elements are quite rare to find. One country holds roughly a third of the world’s deposits of rare earth elements, nearly 36% of global reserves, and as of today it accounts for 90% of global production of these elements. That country is China, where the rare earth elements are concentrated in a handful of mines in three regions: mines in the province of Inner Mongolia, the southern provinces of Hunan, Jiangxi, Fujian and others, and the central province of Sichuan. These areas account for 98% of China’s total production of rare earth elements. What does China do with them?


It manufactures a range of devices. Here is a rough idea of devices that need rare earths: aircraft, missile guidance tools, wind turbines and magnets, batteries and electric vehicles, plasma screens, LCDs, LED bulbs, cameras, speakers and even fertilizers. They all need rare earths, the secret ingredient of every scientific miracle and virtually every Chinese product.

China today is the king of rare earth elements.

“The middle east has oil, China has Rare Earths” – Deng Xiaoping

Deng Xiaoping and Xi Jinping.
Deng Xiaoping had realised as early as 1992 that Rare Earth Elements would rule the manufacturing industry. Xi Jinping who happens to be the paramount leader of the reportedly democratic nation China has not turned down any opportunity to win the trade monopoly in multiple sectors. Image: New America

Deng Xiaoping, the former paramount leader, said this in a speech way back in 1992. The current paramount leader and president-for-life, the one and only Xi Jinping, is hoarding these elements. Rare earth elements have become a secret weapon; moreover, they are a trump card in the trade war with the United States. More than 80% of these precious metals used by the United States are imported from China. The Chinese Communist Party (CCP) has hinted that it will restrict exports to America if the administration continues to impose sanctions. It’s quite a threat! The Chinese regime is using the rare earth elements as a political weapon, a bargaining chip.

Image depicting trade war in circumstances of Rare Earth Elements industry.
China aims to gain a monopoly in every sector and is ambitious to overshadow the United States. Today the picture of a trade war is very clear amid the ongoing Coronavirus pandemic that originated from Wuhan, China. Image: Asia Times

We have other options too!

There are other countries with large deposits of rare earths; if extracted, they can meet the needs of the American and European markets and end China’s monopoly. Which are these countries?

Canada, Australia, Brazil, Vietnam and India. Now here’s the disclosure: India was one of the first countries to recognise the significance of these minerals. Head back to the 1950s, when India established what is known today as Indian Rare Earths Limited, a state-owned corporation based in Mumbai. India had a first mover’s advantage, but it lies hidden under the not-so-rare Indian red tape.


Where does India Stand?

India’s rare earth elements industry lies wasted and underused; the minerals lie buried in Indian soil. India has the fifth-largest reserves of these minerals, if some estimates are to be believed. These deposits are reportedly spread over the eastern and northeastern states of India. According to one estimate, the domestic supply chain of rare earth elements in India, if revived, could potentially be worth Rs 90,000 crore in annual turnover, which is close to $12 billion, and it could generate net capital employment of about Rs 121,000 crore, or $16 billion. As far as foreign exchange is concerned, the rare earths industry has the potential to generate Rs 50,000 crore for India, more than $6 billion.

The scope for the rare earth industry in India is immense. New Delhi must come up with a national strategy to revive it to its full potential, and the broken link between industry bigwigs and researchers must be re-established. This revival will bring both economic and strategic benefits. It can do wonders for the Government’s Make in India plan.

This pandemic has taught the whole world the significance of self-reliance, Aatma-Nirbharta as PM Modi calls it.

The rare earths industry could be India’s chance to achieve self-reliance, a chance to demolish China’s monopoly.

Fukushima Nuclear Waste Disaster: People’s Concern or a Radioactive Conflict?

In 1954, Japan was still tip-toeing among the world economies. To boost its economy and provide people with the necessities of life, mainly electricity, it decided to invest in nuclear energy. Rather than relying on other countries for fossil fuel imports, Japan made a bold move to go ahead with a technology that seemed rewarding in the long run and would make it sustainable for the foreseeable future. The officials and government chose to emphasize the clean, green image of nuclear energy while consciously ignoring the alarming dangers that would later put a dent in the history of Japan.


On 11th March 2011, an earthquake of magnitude 9.1, one that would shatter the records of previous quakes, hit the Pacific coast of Honshu (the main island of Japan). Within an hour after the earthquake, a large tidal wave, a.k.a. tsunami, flooded everything up to 10 km inland from the coastline. The water engulfed everything in its path, leaving behind nothing but a tragic slurry of rubble, concrete, and dead bodies. After Chernobyl, Fukushima is the only other accident classified as Level 7 on the International Nuclear Event Scale (INES).

Aerial view of Fukushima Daiichi Nuclear power plant
Aerial view of Fukushima Daiichi Nuclear power plant, showing reactor buildings. [Source- wikipedia]

Fukushima Nuclear Disaster

The Fukushima Daiichi nuclear plant was located in Ōkuma, one of the towns in the Tōhoku region of Honshu. One of the largest operating plants, built by TEPCO starting in 1974, it contributed a significant amount of the grid’s electricity output. Bearing in mind Japan’s history of natural disasters, the plant was designed to withstand a maximum ground acceleration of 0.46 g, and a large concrete seawall was built to protect the plant from tsunamis up to 5.7 m high. But little did they guess that their measures would fail horribly, resulting in one of the worst nuclear waste disasters in history.

Height assessment of Fukushima Nuclear power plant.
The tsunami struck the station approximately 50 minutes after the earthquake.
A: Power station buildings B: Peak height of tsunami C: Ground level of site D: Average sea level E: Seawall to block waves. [Source- wikipedia]

The earthquake generated ground accelerations of up to 0.56 g, which crossed the stress limit of every reactor. The tsunami that crashed into the plant was approximately 17 m in height, three times larger than the safety officials’ estimate. Every building in the nuclear plant was flooded and rubble lay everywhere. At the time of the accident, the Fukushima plant housed 6 Boiling Water Reactors (BWRs), of which reactors 1, 2, and 3 were operating in full swing. The core of reactor 4 was unloaded, and reactors 5 and 6 were in a cold shutdown state. After the quake was detected, reactors 1, 2, and 3 were immediately shut down by inserting the control rods, halting the fission reaction in a controlled manner.


As reactors 1, 2, and 3 went offline, they could no longer produce the electricity required to operate the cooling systems, so the emergency diesel generators came online. Until now, everything was operating according to the emergency drill. But the tsunami flooded the building containing the diesel generators, knocking them out. Only the backup generator for reactor 6 remained operational; it was pressed into service and used to keep reactors 5 and 6 cooled.

Sketch of BWR used in Fukushima Nuclear plant.
Sketch of BWR used in Fukushima Nuclear plant. DW = Drywell
WW = Wetwell
SF = Spent Fuel Pool
light blue with pink dome = Reactor Pressure Vessel
purple = Secondary Concrete Shield Wall. [Source- wikimedia.org]

BWRs are designed to heat water to steam directly, which in turn drives the turbine to generate electricity; a single loop combines feedwater and steam. Reactor 1 was designed a little differently than the rest. Reactors 2 and 3 had steam-driven pumps that cooled the reactor core by forcing steam into the wet-well suppression pool tank. But reactor 1 had no pump, only a conventional heat exchanger that used convection and gravity to cool the steam. However, due to excessive cooling, the workers had decided to isolate the heat exchanger.

After the tsunami, when devastation was at its peak, the workers tried to reactivate the heat exchanger, but their efforts were in vain. The temperature inside reactor 1’s core kept increasing due to residual heat: the heat generated by the decay of radioactive material formed during fission, which continues even after fission is stopped.
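To get a feel for how much residual heat a shut-down reactor still produces, here is a minimal Python sketch using the well-known Way-Wigner approximation for decay heat. The thermal power and operating time below are assumed, illustrative values, not TEPCO figures.

```python
# Rough estimate of residual (decay) heat after reactor shutdown,
# using the Way-Wigner approximation. Illustrative sketch only:
# P0 and the operating time T are assumed values, not plant data.

def decay_heat_fraction(t_s, T_s):
    """Fraction of full thermal power still produced t_s seconds
    after shutdown, for a reactor operated for T_s seconds.
    Way-Wigner: P/P0 ~= 0.0622 * (t^-0.2 - (t + T)^-0.2)."""
    return 0.0622 * (t_s ** -0.2 - (t_s + T_s) ** -0.2)

P0_MW = 1380.0                 # assumed thermal power of one unit, MW
T = 365 * 24 * 3600.0          # assume one year of prior operation, s

for label, t in [("1 minute", 60.0), ("1 hour", 3600.0), ("1 day", 86400.0)]:
    frac = decay_heat_fraction(t, T)
    print(f"{label:>8} after shutdown: {100 * frac:.2f}% of full power "
          f"= {P0_MW * frac:.0f} MW")
```

Even a day after shutdown the core still produces megawatts of heat, which is why losing the cooling systems was catastrophic.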

Cross section of an employed BWR.
Cross section of an employed BWR. [Source- World-Nuclear.org]
Nuclear Reactor Meltdown GIF.
Nuclear Reactor Meltdown GIF. [Source- wikipedia]

The water level began to drop in reactor 1. It dropped so low that the tops of the fuel rods became exposed to dry air, and heat began to build on a massive scale. Workers were, however, able to restore emergency power to cool the reactor. Unknown to them, a massive boil-off was underway inside the reactor: heat was converting water into steam in large quantities, raising the pressure exponentially. Once again the core was exposed to dry air, and with temperatures reaching up to 2300°F, the fuel rods began to melt, forming a slag of highly radioactive material called “corium.” Reactor 1 had a ‘meltdown.’

Hydrogen Explosion in one of the 4 main reactors.
Hydrogen Explosion in one of the 4 main reactors. [Source- researchgate/Flavio_parozzi.net]

The precarious pressure levels inside reactor 1 threatened to burst the reactor at any time. The plant workers took a gamble by venting the slightly radioactive steam. By then, the pressure and temperature inside had exceeded critical levels, and the Zircaloy cladding of the fuel rods had reacted with steam to produce hydrogen gas. It leaked through uncontrolled pathways and caused an explosion at the top of the building, without damaging the containment structure. But the major problem was that the spent fuel pool built at the top of the reactor was now exposed to the environment.


Reactors 2 and 3 began following the same pattern once their cooling systems went offline. As the pressure rose, the depressurizing valves opened and steam was vented into the wet-well suppression pool tank. The water acted as a good filter, trapping the radioactive elements. But with the water no longer being cooled, it began to boil, reducing the filtration capacity. Fearing a containment rupture, the workers released the steam into the atmosphere, but hydrogen also began to leak into the building. The hydrogen explosion blew off the top frame of the building, exposing the spent fuel pool. The core of reactor 3 melted and the corium that formed dropped to the bottom of the reactor vessel. The hydrogen explosion also caused serious damage to the cooling system of reactor 2, and the meltdown began there as well.

Explosion at Fukushima Daiichi Nuclear Power plant.
Explosion at Fukushima Daiichi Nuclear Power plant. [Source- theguardian]

The temperature in reactor 2 skyrocketed as the cooling system was damaged. Once again, the build-up of hydrogen gas caused an explosion in reactor 2’s containment building. The condition of reactor 2 was so much worse that it suffered a full-blown meltdown. The corium cut through the pressure vessel and fell onto the concrete layer of the reactor. The loss of leak-tightness led to the discharge of unfiltered radioactive nuclear waste into the atmosphere, and nuclear wastewater seeped into the soil.

More Detailed Explanation of Fukushima Accident.


Aftermath and Prevailing Danger of Nuclear Waste

The main concern that arises from the damage report is that the authorities had been notified of the old technology used in the safety systems, but after factoring in the cost of improving it, TEPCO decided to run with the older systems. [Source- Carnegie Endowment] Moreover, the emergency systems had never been tested since the plant’s installation in 1974. Plant boss Masao Yoshida decided to continue using seawater to cool the systems even though TEPCO ordered him to stop. This decision led to the discharge of approximately 18,000 terabecquerels of nuclear waste into the Pacific Ocean.

Radiation levels in Fukushima soil.
Radiation levels in Fukushima soil. [Source- Reuters]

The radiation levels detected in seafood and crops rendered them unfit for eating. Many farmers and the whole fishing community couldn’t sell their produce for nearly 36 months, leaving them jobless. The disaster affected 154,000 lives, with an estimated loss of nearly 21.5 trillion yen ($187 billion). The clean-up of the site is expected to take an estimated 40 years! Though no contaminated water has been released since 2015, more than 1 million tonnes of water have been used to cool the melted reactors. Japan has now decided to discharge the treated nuclear wastewater into the sea. This decision has sparked outrage among the local people and the fishing community of Fukushima. Years after the incident, they had finally managed to convince people that Fukushima food is safe, and Japan’s decision would put their livelihood in peril.

Fish dying due to radiation exposure by nuclear wastewater.
Fish dying due to radiation exposure by nuclear wastewater. [Source- huffpost.com]
Significant drop of nuclear power to produce electricity in Japan.
A significant drop in use of Nuclear power to produce electricity in Japan after the incident. [Source- Wikipedia]

The decision is so delicate that the debate between the government and the people could turn into a conflict any second. South Korea and China are against the decision, but the U.S. backs Japan in the debate. Moreover, the plant clean-up facility will run out of storage tanks for water in late 2022. With a large chunk of radioactive corium waste remaining, they will ultimately have to consider more viable options. Decisions like these affect the public’s acceptance of modern technologies. Nuclear energy is the key to a greener, if not renewable, energy future. The whole mumbo-jumbo boils down to just one question:

“Do we Need Nuclear Energy to battle Climate Change?”


Let me know your views in the comments below, and till then keep reading!

Muons: The Anomalous magnetic moment leading to new physics


Nowadays everyone is talking about the experiment done at Fermilab on the anomalous magnetic moment of the muon that might point to new physics. So now we are going to get into a whole lot of scientific terms like magnetic moment, virtual particles, Feynman diagrams and so on. So be ready for what’s coming ahead.
When an electron revolves in a loop, it creates a magnetic field, behaving like a magnetic dipole with a north and a south pole. This gives rise to what we call the dipole moment (μ), which equals the electron current (I) times the area of revolution (A). The revolving electron also has angular momentum (L) = mvr.

The ratio of the dipole moment to the angular momentum comes out to be μ/L = q/2m, where q is the charge and m the mass of the electron.

The muon and their magnetic moment
(Source: BYJU’S)
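The classical result μ/L = q/2m can be checked numerically. The short Python sketch below models an electron on a circular orbit with arbitrary assumed values of speed and radius; the ratio comes out to q/2m regardless of the orbit chosen.

```python
# Numerical check of the classical result mu / L = q / (2 m):
# model an electron on a circular orbit with assumed (arbitrary) v and r.

import math

q = 1.602176634e-19      # electron charge magnitude, C
m = 9.1093837015e-31     # electron mass, kg

v = 2.0e6                # assumed orbital speed, m/s (illustrative)
r = 1.0e-10              # assumed orbital radius, m (illustrative)

I = q * v / (2 * math.pi * r)     # current of the orbiting charge
A = math.pi * r ** 2              # area of the loop
mu = I * A                        # dipole moment: mu = I * A = q v r / 2
L = m * v * r                     # angular momentum: L = m v r

print(mu / L)            # ratio of dipole moment to angular momentum
print(q / (2 * m))       # classical prediction q / (2 m)
```

The two printed numbers agree, showing the ratio is independent of v and r.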

[the_ad id=”7507″]

Finding the reality with Muons

But this has been calculated neglecting various quantum mechanical phenomena. According to quantum mechanics, elementary particles have an “intrinsic angular momentum” known as the spin angular momentum (S). Thus, the quantum mechanical calculation gives the value as μ = g (q/2m) S,

where g is called the g-factor of the particle.

Now consider a muon travelling in a particular direction. It interacts with the magnetic field present in that region via virtual photons, which are the mediators of the electromagnetic field. A muon in the magnetic field absorbs a photon and gets scattered. This can be represented by the Feynman diagram below.

Feynman Diagram and Muon

Now there is a small thing you need to know about the behaviour of charged particles in a magnetic field. In the presence of an external magnetic field, the direction of the spin precesses around the magnetic field: it rotates about the axis of the field, like a spinning top precessing around its axis of rotation. The rate of precession is directly proportional to the magnetic moment and the applied magnetic field (rate ∝ μB). For now, we are going to call the direction of spin the direction of the spin magnetic moment, for obvious reasons.
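As a rough numerical sketch of this precession rate, the snippet below computes ω = gqB/2m for a muon, taking g = 2 for simplicity and assuming a 1.45 T field (roughly the storage-ring field used in the g-2 experiments):

```python
# Sketch of the spin precession rate, omega = g * (q / 2m) * B,
# for a muon in a magnetic field. Assumptions: g is taken as exactly 2,
# and B = 1.45 T (approximately the g-2 storage ring field).

import math

q = 1.602176634e-19     # elementary charge, C
m_mu = 1.883531627e-28  # muon mass, kg
B = 1.45                # assumed magnetic field, T
g = 2.0                 # simplified g-factor

omega = g * q * B / (2 * m_mu)   # precession angular frequency, rad/s
print(f"omega = {omega:.3e} rad/s")
print(f"frequency = {omega / (2 * math.pi):.3e} Hz")
```

The precession is of order a gigahertz in angular frequency, fast enough to be measured over the muon's short (dilated) lifetime.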


The Feynman Diagrams and Connection to Muons

The g-factor for the muon calculated from this single-photon interaction comes out to be exactly 2. But according to the theory, this is not the only contribution. From quantum mechanics, the vacuum is not empty: it is full of fluctuating energy. Every now and then, virtual particles are created and annihilated within a very short time. So theoretically, the interaction of the muon with the magnetic field can happen in many other possible ways.

Feynman Diagrams and Muons
(Source: ResearchGate)

Every next process in the diagram is more complicated than the previous one, but the more complicated it is, the lower its probability of occurring, and hence the smaller its contribution to the g-factor. The theoretical value of g is slightly more than 2 and is equal to 2.00233183620.

Getting back to the experiment done at Fermilab

A beam of muons, all having the same direction of spin magnetic moment, is sent into a circular chamber with a uniform magnetic field. Due to the force of the magnetic field, the muons revolve inside the chamber at speeds comparable to the speed of light. Due to the effect of the magnetic field, the spin magnetic moment of each muon precesses. By measuring the rate of this precession, one can calculate the magnetic moment of the particle and thus the g-factor.

Now, a few things about the muon:

1. The muon is about 200 times heavier than the electron, and it quickly disintegrates into a positron (the electron’s positively charged antiparticle) and two neutrinos (an electron neutrino and a muon neutrino). But as the muons are moving so fast, time gets dilated for them, and this decay occurs after a slightly longer time in the lab frame.

Muon and time dilation
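As a quick illustration of this time dilation, the sketch below multiplies the muon's rest-frame lifetime by an assumed Lorentz factor of γ ≈ 29.3, the value corresponding to the "magic momentum" (~3.09 GeV/c) used in the g-2 experiments:

```python
# Time dilation for muons in the storage ring: an illustrative sketch.
# tau is the muon's rest-frame lifetime; gamma = 29.3 is the assumed
# Lorentz factor at the g-2 "magic momentum".

tau = 2.197e-6          # muon lifetime at rest, seconds
gamma = 29.3            # assumed Lorentz factor of the stored muons

tau_lab = gamma * tau   # lifetime seen in the laboratory frame
print(f"rest lifetime: {tau * 1e6:.3f} microseconds")
print(f"lab-frame lifetime: {tau_lab * 1e6:.1f} microseconds")
```

The dilated lifetime of roughly 64 microseconds gives the experiment many more precession cycles to observe before the muons decay.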


2. Due to a phenomenon called parity violation, the electrons (or positrons) are emitted preferentially in one direction (the direction of the spin magnetic moment). The energy of the particles emitted along the spin magnetic moment direction is slightly higher than that of particles emitted in other directions.

But due to the precession of the spin magnetic moment in the presence of the magnetic field, the energy of the emitted particles oscillates at a certain frequency (namely, the frequency of the precession).

From these oscillations the precession frequency can be determined, and from that frequency the magnetic moment, and so the g-factor, can be calculated.

The value of g can be expressed as

g = 2 + α, where α is a very small number

So, (α = g – 2) is the measure of the contribution of other Feynman Diagrams i.e. the significance of more complicated processes.
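As a quick numerical check, the snippet below computes α from the theoretical g quoted earlier. Note that the conventionally reported "anomalous magnetic moment" is a = (g − 2)/2, half of the α defined above:

```python
# Quick check of how small the anomaly is, using the theoretical
# g value quoted above. The conventionally quoted anomaly is
# a = (g - 2) / 2, i.e. half of the alpha defined in the text.

g_theory = 2.00233183620

alpha = g_theory - 2          # the text's alpha = g - 2
a = alpha / 2                 # conventional anomalous magnetic moment

print(alpha)   # contribution of the higher-order diagrams
print(a)       # the number usually quoted, ~0.001166
```

So the entire debate between theory and experiment plays out in roughly the eighth decimal place of this tiny number.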

The Real Questions

The theoretical value calculated long ago and the experimental value determined in the laboratory do not quite match. What could be the reason for this? It’s a bigger question than you can imagine. There can be many possibilities, but the few we are interested in are these:

1. There can be new particles (which are yet to be discovered) which are created as virtual particles in the process.

2. There can be a new force among the interaction of these particles through which they interact.

3. There can also be a possibility that we need more experimental data to get closer and closer to the theoretical value.

So now Fermilab is running the experiment in five phases to gather more data and get to the final answer. Whether it will be whole new physics that comes out, or a confirmation of our solid theory (the Standard Model), we do not know for sure.


The Rise of COD Mobile


After selling millions of copies of various Call of Duty games on PC and other gaming platforms, the Call of Duty franchise entered the mobile gaming scene with its very own game, Call of Duty Mobile (COD Mobile), in 2019. The game was initially released as a closed beta and went global on October 1, 2019. It became an instant success, and the Activision release witnessed one of the largest mobile game launches in history, generating over US$480 million with 270 million downloads within a year; these numbers are from the global version alone. In other regions, the game was released by other publishers like Garena, Tencent and VNG. Some of the features that led to the growth of COD Mobile are discussed at the end of the blog, so be sure to read the whole thing.

The developers of the game have tried to bring a similar gameplay experience from their various titles to Call of Duty Mobile, and they seem to have been pretty successful at doing so. The game has various modes and maps to choose from and is mainly divided into two parts: Multiplayer (MP) and Battle Royale (BR).

Multiplayer

The Multiplayer aspect of this game is what makes it unique and helps it stand out from competitors like PUBG Mobile and Garena Free Fire. Multiplayer mode is a first-person shooter similar to the other COD titles. Core game modes like Team Deathmatch, Search and Destroy, Hardpoint and Domination are all 5v5 matches, and players can choose to play either non-ranked (public) matches or ranked matches if they opt to be a bit more competitive.

Every season, new featured modes and maps are added. These include famous modes from previous COD titles like Attack of the Undead, Juggernaut, 3v3, 10v10 and many more. MP mode also has features like scorestreaks and operator skills that one can use to gain a strategic advantage over the opposing team. In addition, there is a wide arsenal of weapons to choose from, ranging from knives to rocket launchers, from trip mines and thermites to EMP grenades, and from ARs like the AK-47 to snipers like the DLQ 33. One can even buy lucky draws and crates to get epic or legendary skins and show them off in battle.


Battle Royale (BR)

(Source: Gameranx)

The Battle Royale mode is also something to look out for. In this 100-player mode, players can choose to play solo, duo or in a squad. Battle Royale takes place on a beautiful map called Isolated; there is one other map, Alcatraz, but it is a limited-time mode. You can traverse Isolated using various vehicles like cars, ATVs, helicopters and even zip lines and ropeways. In a recent update, a new truck vehicle was added that can instantly kill or knock down any player it runs over.

(Source: Reddit)

Another interesting feature of BR in Call of Duty Mobile is its classes. You can choose from the various classes shown in the picture above. Their abilities need some time to recharge after every use, but once activated they can aid you in numerous ways depending on the class chosen. BR also has ranks, and each rank gives players special rewards. The MP and BR ranks are both reset every two months, when a new ranked season is introduced with new rewards. A regular season also arrives every 45 days, bringing a new Battle Pass, guns, maps and modes.


Gunsmith

The recent addition of the Gunsmith feature has given COD Mobile a huge boost in popularity. No other mobile game out there offers this kind of customization and weapon stats. With Gunsmith, players can customize their weapons to their heart's content, and that doesn't just mean cosmetic customization like skins, stickers or charms, but real functional customization.

(Source: BRGeeks)

Players can control almost every aspect of their gun. One can literally change the recoil, damage, ADS speed, ADS bullet spread, hip fire and many more attributes. There are numerous attachments for each gun, such as grips, stocks, barrels, optics, perks and lasers.

The Gunsmith has not only brought a new experience to the mobile gaming platform but has also attracted many new players/gamers who seem to be interested in this new weaponry of COD Mobile.


Graphics and other

(Source: KTMX)

Among the final features that attract many players to COD Mobile are its graphics and realistic physics and movement. The graphics are beautiful and well optimized. As for movement and physics, areas in which all the previous COD titles excelled, COD Mobile is no exception: the movement is well optimized and very smooth. The game physics and mechanics are solid, especially the gun mechanics. Only the vehicle mechanics in BR need a bit of tweaking; other than that, everything is simply amazing.


One is guaranteed to love this game if they are a fan of first-person shooters or battle royale games. The e-sports scene for this game is also growing at a rapid rate, especially due to the COD Mobile World Championship, LOCO India Cup and Mobile Mayhem.

One can download the game from here:

Those interested in other gaming posts can read The Best PUBG Mobile alternatives out there

Chemistry 4.0: The Reinvention of Technological World


Sustainability, green technologies and chemistry for better living: can all of this be kept under one roof? After the 2018 Industry 4.0 push for 'smarter' factories, chemistry as a whole was elevated to a new level. Robust tools were manufactured, novel techniques were adopted and new compounds were synthesised, giving birth to Chemistry 4.0. This was not limited to industrial chemical engineering and factories; it also reinvented the laboratories of the future, providing a new framework and reshaping the way labs and factories coexist and work for a 'better' world ahead. Taking this further, let me take you through some of the sector's most innovative and recent technological updates.


Personalized Medicines for Everyone

A pandemic does not strike the world every day. And the virus or disease is not always generalized, nor is its variant identical in everyone's body. There may be genetic individuality and uniqueness to how a particular disease or disorder manifests in a given person. So why should medicines and treatment be the same for everyone?

Personalized Medicines for Everyone in chemistry 4.0
(Source: Financial Times)

Personalization is the new digitalization in the field of pharmaceuticals. It is revolutionizing the entire industry by offering the more targeted, specific therapeutics a patient needs. In Chemistry 4.0, the focus is on achieving greater efficiency by collecting and analysing huge amounts of data for individual patients. Manufacturing location-specific medicines and developing patient-preferred therapeutics can lead to a speedier, easier recovery.

Flow chemistry plays a key role in this aspect of the Chemistry 4.0 revolution. It offers a one-stop solution for manufacturing personalized medicines by increasing the efficiency of drug substance development, thereby expediting production. With this wealth of digital tools, Chemistry 4.0 continues to gain momentum, improving efficiency and cost-effectiveness and ultimately providing faster access to more tailored medicines.


Dominating the Lighting with Chemistry 4.0

Solar-powered houses, greenhouses, photovoltaics and more: over the last few decades this industry has grown by leaps and bounds and has dominated the sustainable-energy market in recent years. Unfortunately, conventional solar cells lack efficiency and throughput, making them less attractive to consumers. But Chemistry 4.0, and the digital revolution that comes with it, has opened up a whole new area of photovoltaics: OPV cells.

Dominating the Lighting with Chemistry 4.0
(Source: ChemistryWorld)

Organic photovoltaic (OPV) cells are manufactured simply, by combining p- and n-type organic semiconducting polymers and small molecules. They generate electricity through the photovoltaic effect when light with energy at or above the band gap hits the cell. Fine tunability, a thinner and lighter structure, and high flexibility make OPVs a strong alternative source of solar energy for the future. As a power source, OPVs also produce the lowest carbon footprint, have the shortest energy payback time and offer a higher power-per-weight ratio than any other solar cell technology.
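The band-gap condition can be sketched numerically. This is only an illustrative sketch, not from the article: the helper functions and the 1.6 eV band-gap figure are assumptions chosen as a typical order of magnitude for organic semiconductors, using the standard approximation E(eV) ≈ 1240 / λ(nm).

```python
# Rough sketch (illustrative, not from the article): checking whether a photon
# carries enough energy to excite an absorber with a given band gap.
# The 1.6 eV band gap below is a hypothetical example value.

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV from wavelength in nm (E = hc / lambda)."""
    return 1239.84 / wavelength_nm

def is_absorbed(wavelength_nm: float, band_gap_ev: float) -> bool:
    """A photon is absorbed only if its energy meets or exceeds the band gap."""
    return photon_energy_ev(wavelength_nm) >= band_gap_ev

band_gap = 1.6  # eV, a typical order of magnitude for organic semiconductors
print(is_absorbed(550, band_gap))   # green light, ~2.25 eV
print(is_absorbed(900, band_gap))   # near-infrared, ~1.38 eV
```

This is also why semi-transparent OPVs can be tuned to pass the wavelengths plants need: wavelengths whose photon energy falls below the gap simply go through.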

One of the most innovative uses is in a solar-powered smart greenhouse. In such houses, the outer walls are composed of semi-transparent OPVs that absorb light outside the spectrum plants use for photosynthesis, so they can power ventilation systems without harming plant growth. The potential of OPVs, which can be configured with the Internet of Things (IoT), is proving revolutionary in Chemistry 4.0 applications.


The Crown of Chemistry 4.0

Semiconductors, AI and IoT: these are the trendiest terms of the Industry 4.0 revolution. While silicon remained the undisputed superhero of the semiconductor industry, an alternative was needed, not because of efficiency or cost, but for performance and high-power applications. This is where Chemistry 4.0 took charge.

Semiconductors: The Crown of Chemistry 4.0
(Source: ChemistryWorld)

The development of novel, high-quality compound semiconductors that took the industry to another level was done by the German manufacturer Umicore, which produces trimethylgallium (TMGa). Innovative, safe and unique in all aspects, its process offers highly sustainable and ecological production while minimizing hazards and material loss, making it one of the most advanced manufacturing methods in the industry, with almost 100% yield. Umicore is an expert in developing and manufacturing high-purity metals and cutting-edge offerings backed by quality expertise.


Transformative Technologies

What is the most effective way of learning and practising to be a pilot? Flight simulation. Similarly, simulating a task before manufacturing makes the later manufacturing more effective, reducing production time and conserving resources. Incorporating continuous direct compression (CDC) into this process can improve efficiency, reduce waste and minimize batch-to-batch variability.

Transformative Technologies in the field of medicinal chemistry 4.0
(Source: ChemistryWorld)

CDC is, in effect, a flight simulator for manufacturing medicinal tablets. It allows scientists to trial their formulations in silico before manufacturing them in the real world. In the manufacturing space it can act as the autopilot, providing enhanced feedback and control of the operation and using real-time data to maintain control predictively rather than retrospectively, understanding what could go wrong rather than what went wrong.


Next-Gen Rapid Diagnostics in Chemistry 4.0

All of us have heard of PCR tests and rapid Covid-19 tests by now (and some of you may have experienced them). Polymerase chain reaction (PCR) tests fall short on both cost and time, while rapid tests suffer from questionable accuracy and limited ability to detect asymptomatic patients. This raises the need for a test system that is both accurate and rapid while remaining effective and trustworthy.

Next-gen Rapid Diagnostics via Chemistry 4.0
(Source: ChemistryWorld)

Here's where Chemistry 4.0 jumped in with a robust new development. Brightline Diagnostics (DX) was formed to focus on manufacturing conjugated polymer nanoparticles (CPNs). With their powerful brightness, producing detectable signals even at extremely low levels, a lateral flow device incorporating these nanoparticles can help detect asymptomatic Covid-19 carriers. Powered by these innovative probes, Chelsea Technologies' fluorometric reader can enhance detection sensitivity further. The result is a highly sensitive and robust diagnostic platform known as Claritas.

With such Industry 4.0 and Chemistry 4.0 innovations and technologies working together, a future where we have diagnostic kits on our quantum-technology-based phones is achievable. There will be a whole new era of technology powered by, and for, chemistry.


Laser Cooled Antimatter To Provide New Insights Into Nature


Nature is a weird entity.
Spewing incomprehensible anomalies and unfathomable mysteries into every nook and corner, the universe truly offers a challenge to the people trying to make sense of it. Our theories, no matter how accurate they seem, are vulnerable to the tricks of nature, so physicists always look for newer and better ways to test them. With advancements in technology, today we are able to test our theories to unprecedented accuracy and fine-tune our understanding of how the universe works.
On 31 March, the Antihydrogen Laser PHysics Apparatus (ALPHA) collaboration at CERN successfully cooled down antimatter using lasers, opening the door to measurements of higher precision and allowing us to further our grasp of the machinery of the world. But how was this achieved?

Antimatter…

In 1928, the genius of Dirac stumbled upon a magnificent equation that brought chaos to the physics world. He produced an entirely correct equation whose prediction of particles with negative energy contradicted common sense. Most of his colleagues around the world rejected the theory, calling it absurd and senseless. But nature has its own ways: it turned out that these particles are not merely an ad hoc artifact of a theory, but really do exist. They were called antiparticles, and they constitute a new family of matter called antimatter.

dirac equation
The most beautiful equation in physics: the Dirac equation predicts the existence of antiparticles. An anti-electron behaves exactly like an electron but carries positive charge, so it follows the same curve as an electron in a cloud chamber except that it bends the other way.
[Image: CERN]

Nature worships symmetries, and thus every particle in the Standard Model has its own counterpart, the corresponding antiparticle. These antiparticles are rare to find and difficult to produce because they tend to annihilate as soon as they come in contact with their matter counterparts. CERN's accelerators produce these exotic particles by smashing near-light-speed particles together, forming particle showers that include antiparticles. Experimentalists at CERN confined antiprotons and antielectrons into an antimatter cloud using magnetic fields, causing them to form antihydrogen. The antihydrogen atoms were then cooled down by firing lasers at them.

Cooling…

How do you cool something down using lasers? First, we need to understand that temperature is directly related to the velocity of particles: the more kinetic energy the constituent particles of a system have, the higher its temperature. The antiparticles produced at CERN start out at very large speeds, millions of metres per second, corresponding to very high temperatures. Cooling these particles essentially means slowing them down. One easy way would be to immerse the antiparticles in colder matter, just as you place something in a refrigerator, which is colder than the surroundings, to cool it. But antiparticles, instead of cooling down, would interact with the cold matter and annihilate. So the researchers had to come up with an ingenious method, which involves playing with lasers, to cool them down. Think of the antiparticle as a huge rock advancing toward you at considerable speed. You can slow it down by firing a very large number of bullets at it: the photons, which impart momentum to the antiparticle in the direction opposite to its motion, essentially slowing it down.

The quantum phenomena…

Unlike a classical rock, which can be hit by any bullet fired at it, the antiparticle is a quantum particle, and hence only photons of a certain frequency can be absorbed by it. This frequency corresponds to the energy required to excite the anti-electron from the ground state to one of the excited states; photons of any other frequency do not affect the antiparticle at all. The anti-electron absorbs the photon, jumping to a higher energy level (the excited state), and then emits a photon in some random direction, returning to the ground state. By conservation of momentum, the antiparticle must now be moving slower than before. Repeating this for a few hours can slow the antiparticle down considerably.
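The momentum-kick picture can be put into rough numbers. The sketch below is a back-of-the-envelope illustration, not the ALPHA collaboration's actual analysis: each absorbed photon slows the atom by the recoil velocity Δv = h / (mλ), and the 90 m/s initial speed is an assumed value used only to show the scale.

```python
# Back-of-the-envelope sketch (illustrative numbers, not from the article):
# each absorbed photon changes the atom's speed by the photon recoil
# velocity dv = h / (m * lambda). For antihydrogen driven at the
# Lyman-alpha line (121.6 nm) this is roughly 3 m/s per photon.

H = 6.626e-34           # Planck constant, J*s
M_HBAR = 1.673e-27      # antihydrogen mass, kg (same as hydrogen)
LYMAN_ALPHA = 121.6e-9  # transition wavelength, m

recoil_dv = H / (M_HBAR * LYMAN_ALPHA)  # m/s lost per absorbed photon
print(f"recoil per photon: {recoil_dv:.2f} m/s")

# Hypothetical initial speed, just to show the scale of the problem:
initial_speed = 90.0  # m/s
kicks_needed = initial_speed / recoil_dv
print(f"photons needed: {kicks_needed:.0f}")
```

A few dozen absorption-emission cycles are enough in this toy picture; in practice, random emission directions and imperfect scattering rates make the process take hours, as described above.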

atomic absorption and emission
The emission and absorption of a photon happen only for specific values of photon frequency, corresponding to the energy differences between levels, and not for all frequencies.
[Image: Socratic]

The Doppler effect…

The one challenge the team now faced was eliminating the unwanted speeding up of antiparticles moving in the opposite direction, i.e. in the same direction as the photons of the laser beam. For this, they took advantage of the Doppler effect.
The Doppler effect is one of the most common, yet most useful, effects in physics. In simple words, if you are moving towards the source of a wave, the wave appears to have a shorter wavelength and a higher frequency; if you are moving away from the source, it appears to have a longer wavelength and hence a lower frequency. An antiparticle moving towards the laser beam therefore sees the photons as having a higher frequency, while one moving along with the beam sees them as having a lower frequency. If the laser frequency is carefully set, atoms moving towards the beam see photons at exactly the frequency they can absorb, while atoms moving away see a much lower frequency and do not absorb them. Thus, by fine-tuning the laser to a frequency slightly below the allowed transition, we selectively let only the antiparticles that need slowing interact with the photons, and the antiparticles moving in the opposite direction go entirely unaffected.
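This frequency-selection trick can be sketched with the non-relativistic Doppler formula f′ = f(1 + v/c). The numbers below (the hydrogen Lyman-alpha frequency and a 100 m/s atom) are illustrative assumptions, not the actual experimental parameters.

```python
# Minimal sketch of Doppler selection (illustrative, not the real ALPHA
# parameters). The laser is detuned slightly below resonance; only atoms
# moving TOWARD the beam see it shifted up into resonance and absorb it,
# while atoms moving away see it shifted even further below resonance.

C = 2.998e8  # speed of light, m/s

def observed_frequency(f_laser: float, v_toward: float) -> float:
    """Non-relativistic Doppler shift; v_toward > 0 means moving toward the laser."""
    return f_laser * (1 + v_toward / C)

f_resonance = 2.466e15                 # hydrogen 1S-2P (Lyman-alpha), Hz
f_laser = f_resonance * (1 - 100 / C)  # red-detuned to match a 100 m/s atom

# An atom moving toward the beam at 100 m/s sees the laser back on resonance:
f_seen_toward = observed_frequency(f_laser, +100.0)
# An atom moving away at 100 m/s sees it detuned even further below resonance:
f_seen_away = observed_frequency(f_laser, -100.0)

print(f_seen_toward / f_resonance)  # essentially 1 -> absorbed, slowed
print(f_seen_away / f_resonance)    # below 1 -> not absorbed
```

The asymmetry is tiny in absolute terms, but atomic transitions are narrow enough that this small detuning decides whether a photon is absorbed at all.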

laser cooled antihydrogen
Doppler cooling of antihydrogen atoms. The team trapped atoms of antihydrogen — the simplest stable atoms that consist only of antimatter particles — using magnetic fields. These atoms move at high speeds within the trap. They then irradiated the atoms with ultraviolet (UV) laser pulses that had a carefully tuned wavelength.
a) Atoms moving towards the laser experience a Doppler effect that shortens the apparent wavelength of the light that interacts with the atoms; the resulting wavelength exactly matches the photon energy that can be absorbed by the atoms. Photon absorption simultaneously excites the atoms and slows them down (cools them).
b) Atoms that move away from the laser experience the opposite Doppler effect, increasing the apparent wavelength of the UV light. The atoms cannot absorb photons at this wavelength, and the laser beam therefore passes through the atoms without causing undesirable acceleration.

[Source: Nature]

After several hours of repeating this, the team observed that the anti-atoms had cooled down considerably, and many of them had attained energies below a microelectronvolt (about 0.012 kelvin in temperature equivalent).
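As a quick sanity check on the quoted figure, converting one microelectronvolt to a temperature via T = E / k_B indeed gives about 0.012 K:

```python
# Unit-conversion check: T = E / k_B for an energy of one microelectronvolt.

K_B_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

energy_ev = 1e-6   # 1 microelectronvolt
temperature_k = energy_ev / K_B_EV
print(f"{temperature_k:.4f} K")  # ~0.0116 K, i.e. about 0.012 K
```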

This cooling of antimatter has made it possible for physicists to perform new, precise experiments to study the behaviour of antimatter and to test the currently accepted theories. One can drop it to check its interaction with gravity, shine light on it to study the energy levels of the anti-atom, or examine the interference pattern of the antiparticles; all sorts of experiments can now probe the accepted theories of antimatter. In a sense, the sky is no longer the limit, and we have ladders taller than the sky to help us reach beyond it.

The Coronavirus Variants: Terrifying Phase of the Pandemic


When the world started witnessing a significant drop in infection cases, headlines about coronavirus variants and different strains overwhelmed most of us. There now seem to be nearly a dozen versions of SARS-CoV-2, of differing levels of concern, because some are associated with greater infectivity and lethality while others are not. It's easy to be alarmed by this multiplicity and to fear that we'll never attain herd immunity. However, evidence is growing that these variants carry similar sets of mutations. This may not be the multifront battle many are dreading, with an infinite number of different viral variants.

Microbiologists and virologists have turned their focus to learning how SARS-CoV-2 is developing adaptations for multiplying and transmitting in humans. Biologists often use the 'experimental evolution' method, in which they raise separate populations of microbes originating from the same strain under identical conditions for weeks or months. They also investigate topics like how antibiotic resistance evolves and how infections become chronic. The power of this approach is that handling multiple populations lets them "replay the tape of life" and study how repeatable, and ultimately predictable, evolution might be.

COVID-19 USA: Coronavirus outbreak measures and effect on the US
China happens to be the origin of SARS-CoV-2; the U.S. is the country worst affected by the pandemic. Credit: Pharma Tech

Basic Sciences have helped in Investigating the Coronavirus Variants

One pattern we see is called convergent evolution, where the same trait emerges in several independent lineages over time, ordinarily as they adapt to similar settings. Some of the best instances of convergent evolution include the sandy colour of several desert animals; lobed fins in whales, walruses and manatees (which are in fact only distantly related); and the capacity of humans to digest lactose into adulthood, which arose numerous times in geographically isolated populations.

In the case of SARS-CoV-2, whole-genome sequences of viruses from thousands of subjects allow us to study convergent patterns. While most mutations are one-offs that die out, some found new lineages that grow more prevalent as the virus succeeds in replicating and infecting many people. When the same part of the virus mutates repeatedly in diverse samples around the world and becomes more frequent, that mutation very likely encodes an adaptation that helps the virus multiply and transmit.

Coronavirus Variants resemble a Ticking Bomb!

With the advantage of improved genome monitoring of the coronavirus variants, numerous current studies have recognised signatures of convergent evolution. In the U.S., one laboratory found at least seven genetically independent lineages that gained a mutation at one special spot on the virus's infamous spike protein, the one it uses to lock onto human cells. Spike is a series of connected amino acids, and the mutation occurs at position number 677. In the initial SARS-CoV-2 this is the amino acid glutamine, abbreviated Q.

In six lineages, this Q mutated to a different amino acid, histidine (H), designated 677H. In the seventh lineage, Q mutated to another amino acid, proline (P). Each lineage also carries a mutation called S:614G, the first notable variation in the virus to be recognised months ago, which spread so widely it is now detected in 90 per cent of all infections. The lineages are named after common birds, "robin" and "pelican", for example, to help distinguish and trace them, and also to avoid creating bias by naming them after the regions where they were first discovered.

One way to envision this type of convergent evolution is as a game of Tetris, where a restricted number of building pieces can be grouped in different ways, in different sequences, to achieve similar winning structures. For example, it is now known that the sequence of mutations in B.1.1.7 makes it remarkably contagious and that the B.1.351 family can dodge antibodies because of E484K.

Because several recently identified coronavirus variants appear to be repeating the mutations found in variants discovered earlier, we can speculate that the virus may be starting to run out of fresh, major adaptations. But this doesn't imply that the powers of evolution will halt as we approach herd immunity and relax restrictions. History tells us that viruses can evolve swiftly to dodge barriers to transmission, especially when infections span large populations. We need to realise that the more infections there are, the more chance mutations will occur, and those that best help the virus survive will propagate. This is why preventing new infections is key. These viral modifications are already editing our biology textbooks on convergent evolution; let's aim to limit the new material.

This image indicates the coronavirus variants found in different countries and shows the sequence of the infamous spike protein. Credit: ACSH

Countries Recognised the Need and Significance of Scientific infrastructure

It is also essential that we make meaningful investments in creating an early-warning system to detect new SARS-CoV-2 variants as well as other emerging pathogens, both recognised and yet to be discovered. Viral genome inspection and sequencing is the solution. The reason so many variants have been identified in the U.K. is the farsighted investment by researchers and state health officials in these technologies.

In the U.S., a notable allocation of money to the CDC from the recent federal stimulus package is already improving the rate at which researchers can sequence and interpret virus samples. This must be supported by strengthening the federal health expertise and research foundation to decode genetic variations in the virus and anticipate the need for future vaccine modifications. It was fundamental science that offered hope in this pandemic through new vaccine technology; given renewed support, it will also be our guardian against future threats.

Animes To Watch Out In Spring 2021

Winter 2021 ended with tons of major hits like Attack on Titan S4, Re:Zero S2 Part 2, Mushoku Tensei, Horimiya and Wonder Egg Priority, to name a few, though it was also filled with a few disappointments like The Promised Neverland S2 and Ex-Arm. The same cannot be said for Spring 2021, which is just around the corner: it has its fair share of famous hit sequels, such as Boku no Hero Academia S5, Zombieland Saga and Fruits Basket: The Final Season, along with a good number of promising new shows such as Shaman King (2021), Edens Zero and Ijiranaide Nagatoro-san.

This list is divided into two main parts: sequels to watch out for and new releases worth watching out for.

Sequels to watch out for this Spring 2021:

1. Boku no Hero Academia Season 5 (My Hero Academia Season 5)

The highly anticipated season 5 of My Hero Academia, which will also be the main highlight of Spring 2021, will be airing from March 27 and is animated by Studio Bones, who also worked on the previous seasons of the show.

This sequel will pick up where Season 4 left off, kicking off with the Joint Training Arc, which pits Class 1-A against Class 1-B in a series of team battles that demonstrate just how much stronger each class has grown. In the wake of All Might's retirement and Endeavor's fraught new role as the Number One Hero, the fifth season will likely see Izuku Midoriya and the rest of U.A. Academy's Class 1-A face new challenges and even more terrifying villains.


2. Fruits Basket: The Final Season

After years of waiting, the concluding manga volumes of Fruits Basket are at last being adapted into anime. The last season will mainly focus on breaking the curse and will take the relationships between the Sohmas to a boiling point.

The final season is expected to wrap up all the story points, just like the manga. The final season is set to premiere on April 6 and is being animated by TMS Entertainment.

3. Megalo Box 2: Nomad

"Gearless" Joe returns in the highly anticipated sequel to Megalo Box, TMS Entertainment's 2018 tribute to Ashita no Joe. The sequel takes place seven years after Joe became champion of Megalonia, the first-ever Megalo Box tournament. The series will be directed by You Moriyama, along with series writer Katsuhiko Manabe and composer Kensaku Kojima, who were part of the team behind Megalo Box.

The sequel will be premiered on April 4 and is being animated by TMS Entertainment.


4. Zombieland Saga: Revenge

(Source: comicbook)

Sakura Minamoto and her friends are back with their idol group Franchouchou. This season will bring the undead idols back to life as they further their idol careers. After making a name for themselves in the first season, they will face bigger challenges on bigger stages in front of bigger crowds.

Like the previous season, this season will have a good mix of comedy and idol performances. The anime is set to premiere on April 9 and is being animated by Studio MAPPA.

5. How Not to Summon a Demon Lord Ω

(Source: TVOM)

Demon King Diablo is back, or should I say Takuma Sakamoto is back. In How Not to Summon a Demon Lord, Takuma Sakamoto, aka Demon King Diablo, was transported into his favourite game, Cross Reverie, or rather was summoned into the game by Rem Galeu, a petite Pantherian adventurer, and Shera L. Greenwood, a busty Elf summoner.

The second season will focus on Takuma as he tries to discover more about his role in this new world. It is set to premiere on April 9 and is being animated by Okuruto Noboru and Tezuka Productions.


New Anime Releases worth watching out for this Spring 2021:

1. Shaman King (2021)

(Source: AniTrendz)

Mangaka Takei Hiroyuki's Shaman King is back with a highly anticipated remake of the older anime adaptation that aired in 2001-02, animated by Studio Xebec. The 2001 version never finished adapting the whole series; even after the manga's serialization in Shounen Jump ended in 2004, the anime was left dormant.

The new anime is set to recapture the original story and bring closure to the series. It will premiere on April 1 and is animated by Studio Bridge.

2. Edens Zero

(Source: GamerBraves)

Mangaka Hiro Mashima is getting his latest creation, Edens Zero, adapted into an anime. For those unaware, Hiro Mashima is the mangaka behind the famous anime series Fairy Tail and Rave Master. Though the character designs of most characters in Edens Zero resemble his previous works, the storyline is completely different, as this one is based on spacefaring.

The new anime is set to premiere on March 29 and is being animated by Studio J.C. Staff.


3. Ijiranaide Nagatoro-san (Don’t Toy with me, Miss Nagatoro)

(Source: OtakuTale)

Ijiranaide Nagatoro-san is one of the most anticipated comedy anime of Spring 2021, adapted from the famous manga of the same name by mangaka 774, aka Nanashi. The show follows high-schooler Hayase Nagatoro, who loves to tease her senpai Naoto Hachiouji, an aspiring artist.

The two start to grow close over the course of the show and eventually develop feelings for each other. The anime is set to premiere on April 11 and is being animated by Telecom Animation Film, famous for its work on Lupin III, Kami no Tou and Orange.

4. Tensura Nikki (Slime Diaries)

Tensura Nikki is the comedic spin-off of the famous isekai anime series Tensura, or That Time I Got Reincarnated as a Slime. This series has no major bearing on the main story; rather, it gives it a more comedic touch as we see the daily lives of the characters in the nation of Tempest, the Jura Tempest Federation, and how they deal with the various problems that arise in the forest country.

The series is set to premiere on April 6 and is being produced by Studio 8bit, the same studio that worked on the previous seasons.

5. 86

(Source: AniTrendz)

For all the military anime fans, 86 is the perfect choice. 86 is an adaptation of the war-themed light novel by Shirabi (art) and Asato Asato (story). The world in the series is engaged in drone battles that allegedly involve no human casualties; this is obviously a lie, as humans from the secret district, 86, are being sent to die in the war.

The show follows characters from this unlucky secret district as they try to complete their missions successfully without losing their lives. The show is set to premiere on April 11 and is being created by Studio A-1 Pictures.

[the_ad id=”7507″]

The premiere dates for the other anime airing this Spring 2021:

(Source: Reddit/AnimeSamaDesu)


First ‘Mars Inspired’ NFT Digital House sold for a massive $500,000


Corona. Lockdown. Quarantine. Bored. The life of almost every one of us since 2019 can be described with these four words. Almost every activity we used to do has shifted online and been digitalized drastically. According to statistics from Statista, a business data platform, there were 4.13 billion internet users worldwide in 2019. This rose to over 4.66 billion by January 2021, a growth of nearly 350 million internet users in that span. Thus, almost 60% of the world's population is now online, and a world without the internet has become unimaginable.

Krista Kim: Creator of the NFT Digital House
(Source: Kristakimstudio.com)

While we can't imagine living without it, Krista Kim thought of living inside it. The contemporary artist wanted to investigate how people's interest in digital life has increased significantly, and the impact digital screens and tools had during the COVID-19 lockdowns. To take this out of the ordinary, she developed the project 'Mars House': a house rendered entirely digitally, in 3D, in Unreal Engine, software usually used to create video games. This is the NFT digital house.

Inspired by Kyoto's architecture, built for the zen consciousness of the community, the Mars House was designed as a 'light sculpture' in May 2020. The house can be experienced exclusively in virtual reality, and in the near future in augmented-reality apps such as SuperWorld.

'Light Sculpture' - NFT Digital House
(Source: Complex)

Reportedly, it became the first house to be sold on the NFT marketplace SuperRare, fetching 288 Ether ($512,000); Ether is a cryptocurrency similar to Bitcoin. But wait, what is an NFT?

NFT Digital House: NFT?

Non-fungible tokens (NFTs) are units of data on a digital ledger that can be exchanged for digital creations like digital artworks, video games, music files, etc. While the digital files themselves are infinitely reproducible, the NFTs representing them are tracked on their underlying blockchains to give buyers proof of ownership. These tokens have swept the entire online collecting world and are an offshoot of the cryptocurrency boom.
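The uniqueness-plus-ownership idea can be sketched in a few lines of Python. This is only a toy, in-memory stand-in for a blockchain ledger; the class and method names here are hypothetical, not a real NFT API:

```python
# Toy illustration of non-fungibility: each token ID maps to exactly one
# owner, and tokens are not interchangeable the way currency units are.
# This is a plain dictionary, not a real blockchain ledger.

class ToyNFTLedger:
    def __init__(self):
        self._owners = {}  # token_id -> owner

    def mint(self, token_id, owner):
        if token_id in self._owners:
            raise ValueError("token already exists: each NFT is unique")
        self._owners[token_id] = owner

    def owner_of(self, token_id):
        return self._owners[token_id]

    def transfer(self, token_id, sender, recipient):
        # Only the current owner can transfer: this is the "proof of
        # ownership" described above, in miniature.
        if self._owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owners[token_id] = recipient

ledger = ToyNFTLedger()
ledger.mint("mars-house", "krista")
ledger.transfer("mars-house", "krista", "buyer")
print(ledger.owner_of("mars-house"))  # buyer
```

A real blockchain additionally makes this ledger tamper-evident and decentralized, but the core data structure is just as simple.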

NFT Digital House: What are NFTs?
(Source: Investment U)

Basically, in economics, fungible means interchangeable, like cash. A Rs. 2000 note is the same as any other Rs. 2000 note and can be exchanged without losing any value. NFTs, by contrast, are not interchangeable: they are unique, are used to prove that an item is one of a kind, and aim to solve the problem central to digital collectibles.

Recently, if you follow the news, Jack Dorsey sold his first-ever tweet – “Just setting up my twttr” – for $2.5m, in the form of an NFT!

The Mars House

The Mars House is a ‘light sculpture’ with an LED substrate extending to the pool, which spreads along the entire perimeter of the house. It is designed to create a soothing, healing atmosphere. It has an open plan, and all floors, ceilings and even furniture were envisioned in glass: tempered, printed, high-quality glass from Italy, with the NFT's physical pieces and fabrics to be made from renewable materials. Beautiful hues can be seen all over the construction, enhancing the experience as a whole.

A Luxury Sitting with an amazing 'Mars' view from a NFT Digital House
A Luxury Sitting with an amazing ‘Mars’ view from a NFT Digital House
(Source: The Sun)

This house extends the idea that screens can be used as digital instruments of well-being, and that the future of housing can be healed and empowered with art and technology. The amalgamation of the two can relax, help and refresh the mind, providing an excellent quality of life. Interestingly, digital artworks and sculptures that could be used to design and decorate such digital houses have already been sold through NFTs.

Virtual furniture, architectural renderings, digital art in JPEG form and beyond: numerous such crypto artworks have been sold through NFTs for enormous sums. This interest in virtual design continues to grow, and signifies how much the world, and the human thought process about acquiring such artforms, has changed.

Meet APC: A Leading-Edge Technology to boost Old Combustion Engines

Combustion engines: a thing of the past? Think again! Researchers at the Eindhoven University of Technology have discovered a way to make this unsustainable technology fit for the future. The secret is the noble gas argon. Their main hurdle now is to find a method to ignite the gas blend at just the appropriate instant. Or, as Jeroen van Oijen, a researcher at the Department of Mechanical Engineering, puts it: “All we need to know is when to start the stroke.”

Combustion engines, both internal (diesel and gasoline) and external (steam), have long been the drivers of the advanced world as we know it. However, environmental concerns about damaging CO2 and nitrogen-oxide emissions have turned this fossil-fuel-driven engine into a questionable legacy, frequently mentioned in the same breath as coal-fired power stations. This explains why many researchers striving for sustainable power generation wish to look beyond combustion as a means of generating energy.

Do Combustion Engines Still have a future?

Plenty of efforts have been made to revolutionize the combustion engine for the 21st century. To achieve this, engineers and research groups have been exploring a number of precise improvements to combustion engines. One promising avenue is the Argon Power Cycle (APC). The APC is a new technology that uses argon instead of air as the working fluid. Combined with hydrogen as fuel, it has the potential for an extremely efficient machine that is not only emission-free but can also help store green energy from solar and wind.

How does Argon Serve the Purpose Despite its Inert Nature?

Argon is a noble gas, which means that it does not react with other gases. It is also monoatomic: its molecules consist of a single atom. This is a pivotal advantage over air, which essentially consists of diatomic molecules like nitrogen and oxygen. Without diving too deep into thermodynamics: if you compress air, as in normal combustion engines, the molecules in the air start to vibrate and rotate. Part of the input power is lost this way, deposited in the molecules as internal energy, instead of all the energy being used to maximize the kinetic energy that drives the piston. With argon, there are no such vibrational or rotational modes, so compression raises the pressure in the cylinder far more effectively. This means the efficiency can be improved by nearly 25%, reaching numbers close to 80%!
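The efficiency gain can be estimated with the textbook ideal Otto-cycle formula η = 1 − r^(1−γ), where r is the compression ratio and γ the heat-capacity ratio of the working fluid (γ ≈ 7/5 for air, 5/3 for monoatomic argon). A rough sketch, with an illustrative compression ratio of 10:

```python
# Ideal Otto-cycle efficiency: eta = 1 - r**(1 - gamma), where r is the
# compression ratio and gamma the heat-capacity ratio of the working fluid.
def otto_efficiency(compression_ratio, gamma):
    return 1.0 - compression_ratio ** (1.0 - gamma)

r = 10.0  # illustrative compression ratio, not a specific engine spec
eta_air = otto_efficiency(r, 7.0 / 5.0)    # diatomic air:      ~0.60
eta_argon = otto_efficiency(r, 5.0 / 3.0)  # monoatomic argon:  ~0.78
print(f"air:   {eta_air:.2f}")
print(f"argon: {eta_argon:.2f}")
```

The higher γ of the monoatomic gas is what pushes the ideal efficiency from around 60% towards the 80% figure quoted above.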

If argon is such a magical gas, the obvious question is why we are still using air in the engines of our motor vehicles. Oxygen is the catch: air is all around us and already contains the oxygen you need to burn the fuel and convert its chemical energy into heat. If you switch to argon, you have to inject the oxygen separately, which requires extra equipment and eats into the efficiency figures.

Where Do We Find This Reportedly Magical Gas, Argon?

A cryogenic air separation plant for oxygen and nitrogen. Side products of such separations include large amounts of Argon gas. Image: Oxygen Planet

Argon is extensively available in air and can be economically derived from it as a side product of cryogenic air separation. What's more fascinating is that we only need to separate argon from the air once. Because argon is a noble gas, it goes through the combustion process without reacting with other gases, so in the end you get almost all of the argon back. The only thing required is to cool it down to remove the water produced by the combustion. In a closed system, it can thus be recycled repeatedly, in an everlasting closed loop.

What else do We Need Apart from Argon?

As brought up earlier, hydrogen gas can be used instead of fossil fuels like diesel or gasoline. Hydrogen has two principal benefits. First, when it reacts with oxygen, we obtain plain old water as the final product, instead of harmful CO2 and NOx. Second, and this is really essential: hydrogen is a very promising storage medium for green energy. This means you have sustainable energy at your disposal whenever you need it, and not just when there is sun and wind!

However, some practical obstacles remain before extensive use of such argon-hydrogen combustion engines becomes common. The most challenging hurdle: to get the combustion initiated, we need to inject fuel and oxygen into the combustion system. This is the kick that gets everything going. Ideally, we would inject the fuel and oxygen when the argon is fully compressed, as this gives the highest efficiency. In practice, it is easier to inject the reactants before compression. However, argon heats up very quickly when compressed, so such a mixture burns before reaching the optimal pressure, which hurts the efficiency of the system. To picture the problem, think of a swing on a playground: you need to push exactly at the moment that your work creates the most momentum, not a second sooner or later. The same is true for the piston in the combustion engine.

What is the Possible New Tech to be Employed for Proper Combustion in Argon based Combustion Engine?

Schematic overview of the Argon Power Cycle with hydrogen as fuel. Credit: Eindhoven University of Technology

This research explores three possible solutions. The first seems the most promising, as it matches the method used in a diesel engine: inject the hydrogen only after the argon and the oxygen have been fully compressed. This avoids premature ignition, but some questions remain. It turns out to be considerably difficult to inject hydrogen into a compressed gas, because hydrogen is remarkably light.

A second alternative is to inject hydrogen and argon while the pressure is still low, and introduce oxygen, a heavier gas than hydrogen, at a later step when the pressure is high. The difficulty here is that this has never been examined before, and oxygen at high pressure tends to react with the metal of the injector, leading to corrosion.

Finally, there is the alternative of introducing both hydrogen and oxygen at a later stage in the cycle. Here the hurdle is to time the injection of the two gases so that they meet and react at the proper moment.

What could be the main application of this engine?

APC Combustion engines to be employed for electricity generation. Credit: International Renewable Energy Agency

Solving these challenges will give us a combustion engine that runs on sustainable fuel and is emission-free. The first and foremost use of the APC engine will be the production of electricity, using power generated from wind and solar and stored as hydrogen. But the APC engine can also run on natural gas or biofuels. It will then no longer be carbon-free, but the great thing about the set-up is that it is closed, which makes it much easier and cheaper to capture the CO2 emissions. These can then be used as an ingredient in the chemical industry. To filter out the CO2, a special membrane is used that has only a limited effect on the overall performance of the system.

This 1 Amazing Equation May Change Your Perception About The World

“Mathematics is the language of Nature.” [Galileo] The real beauty of mathematics lies in its ability to be deterministic, not merely to predict coincidences. This ability to predict outcomes beforehand gave humans a certain sense of power over nature. Yet when a simple, deterministic, non-linear equation spiraled into chaos, the whole system of beliefs crumbled as minds collapsed into madness. It was no logical fallacy, but Nature's way of unfolding a mysteriousness that lies beyond human comprehension: the Logistic Map equation!

This equation not only sparked debates; its appearance over a broad range of fields that aren't even remotely related startled mathematicians. The equation popped up in so many places that it began to appear as spooky as the fine-structure constant. Let's examine the chaos, shall we?!

A Mathematical Coincidence

The Logistic Map equation first gained popularity when it was introduced in a 1976 paper by biologist Robert May. The purpose was to create a demographic model that would depict the rise and fall of a population and predict future population values. Mathematically, it's written as:

xₙ₊₁ = r·xₙ(1 − xₙ)

where xₙ is a number in (0,1] that represents the ratio of the existing population to the maximum possible population. The parameter r, dubbed the ‘growth rate’, lies in the interval [0,4].

The equation may not sound familiar, but its graphical representation is as fascinating as it is mysterious. The cobweb plot presented in the video below depicts the qualitative behavior of the one-dimensional iterated logistic map function, i.e. it shows the long-term fate of initial conditions when the map is iterated over and over.

Cobweb plot and Population vs Growth rate graph.

Let's start with a population of rabbits, a group of rather active bunnies, assuming the bunnies die due to starvation only and not any other natural or artificial cause. We start with an initial value x₀ = 0.7 (70%) and a growth rate r = 1.75. Plotting population against time, after many iterations we notice that the population stabilizes at a value of xₙ₊₁ ≈ 0.4285714, called “The Theoretical Maximum.” The maximum indicates that the environment can support only this much population, with a margin of only 1% error (give or take!).
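The convergence is easy to reproduce in a few lines of Python; a minimal sketch of the iteration just described, using the fact that for 1 < r < 3 the map settles at the fixed point (r − 1)/r:

```python
# Iterate x_{n+1} = r * x_n * (1 - x_n) starting from x_0 = 0.7 with r = 1.75.
def logistic_orbit(r, x0, steps):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

r, x0 = 1.75, 0.7
final = logistic_orbit(r, x0, 1000)
print(final)            # ~0.4285714 after the transient dies out
print((r - 1.0) / r)    # the fixed point (r - 1)/r = 3/7 ~ 0.4285714
```

The "theoretical maximum" 0.4285714 quoted above is exactly (1.75 − 1)/1.75 = 3/7.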

Population vs Growth Rate graph and Bifurcation diagram of Equilibrium Population vs Growth Rate.

Well, this is just the tip of the iceberg. If we vary the growth rate while keeping the initial population constant, the bifurcation diagram, which we might expect to be tame, goes from 0 to 100 real quick!

The main remarks observed here are:

  1. When r is in the range [0,1], the population will die, independent of the initial population.
  2. When r is in range (1,2), the population will quickly approach the value (r-1)/r, independent of the value of the initial population.
  3. When r is in the range [2,2.95), the population will approach the value (r-1)/r but will fluctuate around that value for some time.
  4. When r is in the range [2.95,3.44] the graph splits exactly into 2, the population will permanently oscillate between these two values.
  5. After r falls in range (3.44,3.54) the graph splits into 4, and the population will permanently oscillate between four values.
  6. After r > 3.54409 the population oscillates with a period of 8 (r=3.55), then 16 (r=3.588), 32, etc… and just as r approaches 3.56995, boom, CHAOS!! There's no cycle here, no pattern, just random numbers with no relation other than the equation.
Different scenarios and their visual representations as the value of ‘r’ changes over [0,4].
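The regimes listed above can be detected numerically with a small script; a sketch, with the tolerance and iteration counts chosen for illustration:

```python
# Detect the long-run period of the logistic map for a given growth rate r.
def logistic_period(r, x0=0.5, settle=10_000, max_period=64, tol=1e-6):
    x = x0
    for _ in range(settle):          # discard the transient
        x = r * x * (1.0 - x)
    ref = x
    for p in range(1, max_period + 1):
        x = r * x * (1.0 - x)
        if abs(x - ref) < tol:       # orbit returned to its start: period p
            return p
    return None                      # no short cycle found: chaotic regime

for r in [2.8, 3.2, 3.5, 3.83, 3.9]:
    print(r, logistic_period(r))
# 2.8  -> 1     (single stable value)
# 3.2  -> 2     (oscillates between two values)
# 3.5  -> 4
# 3.83 -> 3     (a window of stability inside the chaos)
# 3.9  -> None  (chaos)
```

Note how r = 3.83 yields period 3 even though it sits past the onset of chaos, which is exactly the "window of stability" discussed below.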

The bifurcation diagram falls into the category of period-doubling bifurcations, since the period changes with varying values of ‘r’. The chaotic behavior provided the first-ever method to generate pseudo-random numbers from a deterministic machine. It's ‘pseudo’ because if you knew the initial values, you could calculate the numbers.

Now, if your observation skills are as polished as Sherlock's, you might notice some gaps in the bifurcation diagram. These regions that exist between the chaos are called “Windows/Islands of Stability.” These windows come in any period you want: 5, 6, 12, any number; you just need the right value of ‘r’. At r = 3.63 the period is 6, at r = 3.74 the period is 5, and at r = 3.83 the period is 3. This type of behavior is termed the ‘Pomeau–Manneville scenario’, characterized by periodic phases with bursts of aperiodic behavior.

Windows Of Stability

Logistic Map Equation; Window of stability at r=3.63 with Period=6.
Logistic Map Equation; Window of stability at r=3.74 with Period=5.
Logistic Map Equation; Window of stability at r=3.83 with Period=3.

Experimental Evidence of Logistic Map Equation

After discovering this chaotic scenario, the mathematicians went blank, with no explanation, like they had hit a wall! With no light in sight, they decided to test this experimentally. Controlling the growth or breeding rate of a population was something they couldn't afford, so fluid mechanics came into the picture. The fluid dynamicist A. Libchaber published a paper in which he created a sort of Schrödinger's box: a cell of mercury forming two counter-rotating convection rolls. He applied a small temperature gradient and measured the temperature using a probe at the top. So far, all good; but as he increased the temperature gradient, at some point he noticed that the values oscillated permanently with a period of two. Then he got a period of 4, then 8, and after some point he encountered chaos.

Period doubling cascade in mercury, a quantitative measurement. [Source- researchgate.net]

This was not the only experimental evidence. In 1998, two biologists published research in which they compared the flicker vision of the human eye to that of a salamander's (Idk, what inspired them!). What they found was that when the frequency of the flickering light passes a certain point, our eyes respond only to alternate flickers, meaning a period of two. The plot of flicker response vs frequency followed the whole bifurcation diagram, which showed that the equation is capable of predicting the neural firing pattern up to a specific point.

Bifurcation Diagram from Human eye flickering vs Slamander flickering experiment.
Synchronous period-doubling in flicker vision of salamander and man. [Source- JNP Journals]

As if all this was not astounding enough, the human heart follows the dynamics of this equation too! In a study conducted in 1992, scientists gave rabbits a drug that made their hearts go into fibrillation (don't know what's the deal between scientists and rabbits!), a condition where your heart beats in such an asymmetric way that it's really not pumping any blood, and if not treated fast, you won't last! They found the period-doubling route to chaos: the rabbits' hearts started with periodic beats, then the period doubled, then quadrupled, and finally the beats reached a state of mayhem. Amazingly, the scientists knew that there were ‘windows of stability’ amid the frenzy, and they calculated the exact conditions for when to apply electrical shocks to return the heart to periodicity. So they used something like chaos to control a systematic entity; that's really beautiful!

Period Doubling route to chaos in Cardiac muscles.
A Garfinkel, ML Spano, WL Ditto, JN Weiss. Controlling cardiac chaos. [Source- Gatech.edu]

The Feigenbaum Constant And Equation-Universality

Feigenbaum constant and Universality of Equations. [Source- theworldismysterious.com]

Now a physicist called Mitchell Feigenbaum thought he would take a shot at the Logistic Map! He observed where the bifurcations occur, and then divided the width of each bifurcation section by that of the next one. He discovered that the ratio closed in on a value of 4.669. This constant was not related to any previously discovered fundamental constant, so it was coined the ‘Feigenbaum constant.’ What he had discovered was true universality: not just the Logistic Map equation, but any other single-hump function, when iterated, follows the same dynamics.
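Feigenbaum's ratio can be checked in a couple of lines, using the standard published period-doubling values of r (quoted here rather than recomputed, since locating them numerically to high precision is delicate):

```python
# Standard published parameter values r_k at which the logistic map's
# period doubles: 1->2, 2->4, 4->8, 8->16, 16->32.
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568750]

# Feigenbaum's ratio: the width of one bifurcation interval over the next.
deltas = [(r[k + 1] - r[k]) / (r[k + 2] - r[k + 1]) for k in range(len(r) - 2)]
for k, d in enumerate(deltas, start=1):
    print(f"delta_{k} = {d:.4f}")
# The ratios head toward the Feigenbaum constant 4.6692...
```

Already the third ratio lands within about 0.01 of 4.669, which is the universality Feigenbaum observed.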

The theory of universality springing from the onset of chaos tells us that simple functions and models deemed predictable hold fragments of uncertainty, fragments that resemble tears in the upholstery of theories, theories that are nothing but revelations of consciousness from a mere coincidence.


This article was originally inspired by this YouTube video from the channel Veritasium. For a visual explanation and the relation to the Mandelbrot set, you can refer here.

You can try the code for interactives yourself. Here’s the GitHub link. Interactives and codes by Johnny Hyman.

Luminous Ether: A Modern Physics Reality that Failed


In 1864, James Clerk Maxwell discovered the basic equations that govern electricity and magnetism, and showed that both phenomena correspond to a single entity known as the electromagnetic field. That was the first step taken towards the unification of fields, and physicists even today are doing their best to combine all of the physical laws into a single entity: the Theory of Everything. Maxwell was the first to show that the task of unification is not futile.

From his theory, Maxwell discovered that the waves in the electromagnetic field travel at the same speed as that of light. He said that light itself was an electromagnetic wave of a particular frequency. This bold statement of Maxwell was later confirmed by the experiments of Heinrich Hertz.

The Beginning of Luminous Ether
(Source: Reddit)

The Beginning

All known physical waves vibrate in some medium (air, water, etc.). So it was assumed that light must also have some medium through which it propagates. This medium was given the name “luminiferous ether”. Many physicists at that time gave special attention to the study of the properties of this “ether”.

Maxwell described the subject of this research, in his own words, as “whatever difficulties we may have in forming a consistent idea of the constitution of the ether, there can be no doubt that the interplanetary and interstellar spaces are not empty but are occupied by a material substance or body, which is certainly the largest and most probably the most uniform body of which we have any knowledge”.

Many physicists tried to infer the hypothetical properties of this medium. Since light travels so fast, the elasticity of the medium should be enormous. Since light is a transverse wave, the medium could only be solid, and it should have a complex structure, as it cancels out longitudinal propagation. So although the world would be filled with a transparent “glass” much harder than steel, this “glass” offers not even the slightest resistance to the passage of material bodies. Some physicists proposed that the ether behaved as a solid for motions as rapid as light's, while for material bodies like the Earth it behaved like a fluid, in the manner of certain wax-like solids with deformation-rate-dependent viscosities.

The Michelson-Morley Experiment

In 1887, Michelson and Morley prepared a setup to find the velocity of the Earth with respect to the “ether”. They set up an optical racetrack that pitted a light beam moving north-south between two parallel mirrors against an east-west travelling beam. The idea was that the ether wind would give one of the tracks an advantage, and that one was certain to win. But no such victory was observed; the experiment could detect no movement of the luminous ether past the Earth.
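The advantage the ether wind should have conferred can be quantified with the standard textbook estimate for the expected fringe shift, ΔN ≈ 2Lv²/(λc²). A back-of-the-envelope sketch with approximate values for the 1887 apparatus:

```python
# Expected interference fringe shift if the ether existed:
# delta_N ~ 2 * L * v**2 / (wavelength * c**2)
c = 3.0e8            # speed of light, m/s
v = 3.0e4            # Earth's orbital speed through the supposed ether, m/s
L = 11.0             # effective optical path length of the apparatus, m
wavelength = 5.5e-7  # yellow light, m

fringe_shift = 2 * L * v**2 / (wavelength * c**2)
print(f"{fringe_shift:.2f} fringes")  # ~0.4, well within the instrument's reach
```

A shift of roughly 0.4 fringes was comfortably detectable with their interferometer, yet none was seen: the famous null result.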

The Michelson-Morley Experiment:
Defining Luminous Ether
(Source: Britannica)

Michelson and Morley's failure to detect the “ether-wind” led physicists to propose that heavy bodies like the Earth trap the ether and carry it along with them. However, attempts to detect such a phenomenon failed. To explain the null result of the Michelson-Morley experiment, some physicists invoked the phenomenon of “ether-squeeze”. It proposed that motion through the ether resulted in a tiny contraction of all physical bodies in the direction of the motion, so the light beam that would have lost the race would now have a shorter distance to travel, thanks to the squeeze. This was a desperate attempt to save the appearances of the ether.

Although the properties of the ether grew preposterous with each new investigation, no one ever questioned the existence of the ether.

Einstein’s Contribution to the Luminous Ether

In 1905, Albert Einstein published a new theory of space and time. This theory said that only relative motions were of any consequence for the basic laws of physics. There is no physical means by which one can know the absolute motion of any body through space.

In his theory, although space and time were relative concepts, certain other physical quantities were absolute, and all theories are to be made out of these quantities alone. Only this way can the laws be the same for all observers.

Einstein’s Contribution to the Luminous Ether
(Source: Einstein-online)

One of Einstein's absolute quantities was the speed of light, and another was the spacetime interval. Although space and time were themselves relative, a certain quantity made out of them, the spacetime interval, was absolute. This gave rise to the notion that our world is four-dimensional, consisting of three spatial dimensions and one time dimension.

The luminous ether, a body “standing still in space”, cannot serve as an absolute, because it has different speeds when measured from different reference points that are themselves relative. If Einstein's theory was correct, and it was experimentally verified later, the concept of the ether could never be a physical law. The luminous ether plays no part whatsoever in modern physics. The ether is a reality that failed.


Time: The reason behind the Gravitational Force


General Relativity can be said to be one of the greatest theories. The interplay of space and time that gives rise to the fundamental force of the universe makes it geometrically the most elegant theory out there. It is intricately complicated, but beautiful in the sense that the underlying principle behind the theory is simple and magnificent. But the math of the theory is so complex that in 1919, when someone asked Sir Arthur Eddington whether it was true that only three people in the world understood the theory of general relativity, he allegedly replied: “Who's the third?” This theory is thus heavily misunderstood, with almost all the analogies offered to understand it mistaken in some way or the other.

The Flaw in the View

The very first illustration someone shows when explaining gravity in the sense of general relativity is that of a massive body placed on a stretched rubber sheet.

rubber sheet analogy
The rubber sheet illustration of the spacetime curvature
[https://www.youtube.com/watch?v=MTY1Kje0yLg]

The massive body causes a depression in the rubber sheet, and this dip causes other bodies to move towards it. But in reality, this view is heavily flawed, just like many other illustrations.
First of all, the two-dimensional sheet curves downwards, i.e. into a third dimension. So for this view to be true, there would have to be a fifth dimension, apart from the four spacetime dimensions, into which spacetime curves. This is not the case in reality, where the four dimensions of spacetime curve within themselves and not into some other dimension.

The sheet also curves only because an external force (here the Earth's gravity) pulls down the mass; in a vacuum, the sheet would not curve at all. This view implies that there must be an external gravity-like force pulling the mass into some other dimension and causing spacetime to curve, which again is entirely a misinterpretation.

This curving in two dimensions of space and zero dimensions of time actually looks like this, 

Actual 2D space curvature according to general relativity. There is no dip, no downwards pull and no external force acting on it. [Image: West Texas A&M University]

which is completely different from the rubber sheet analogy. Notice that there is no dip or external force acting in the downwards direction, but only a curve in the spacetime grid. A body moving with some velocity, as it gets close to the gravitating body, follows a straight path (denoted by the space-grids) in curved space. This looks to us as if the object is attracted to the gravitating mass. 

It's all fine up to now, but here we have a problem. A body initially at rest at some distance r from the gravitating object has no reason to be attracted to it. It is fine sitting just there, even if the spacetime is curved; i.e. there should be no force on a stationary object. This would mean that gravity is a velocity-dependent force, which is not the case in reality.

So where are we flawed?

The answer is simple. 

Time. 

We ignored time as a dimension and did not consider its curvature at all. Gravity without time is incomplete; general relativity without time is inconsistent. Gravity can only be explained by a curvature in “spacetime”, not in “space” alone. How?

Consider a lift in empty space. Einstein's special relativity states that there is no way a person inside that lift could know whether he is moving uniformly or not. But what happens if the lift is accelerated? The equivalence principle states that accelerated motion is equivalent to gravity, so the person inside the accelerated lift still has no way to find out whether he is accelerating or whether gravity is acting on the lift.

What if we set things up in a way that violates this condition? Could we enable the person inside the lift to tell whether it is gravity or acceleration he is experiencing? Consider a photon emitter placed at the top of the lift and a detector at the bottom, both equipped with clocks that measure the time interval between the emission of two photons and between their detection. Since the lift is accelerating, its velocity keeps increasing, and hence the interval between the emission of two photons and the detection of the same two photons is not the same.

The accelerated lift in free space with a photon clock.
[Video: Physics Videos by Eugene Khutoryansky]

If there was a one-second gap between the two photons when emitted, then, since the second photon has to travel a shorter distance than the first, the interval between the detection of the two photons is less than one second. To the person inside the lift, it would look as if the clock at the bottom is running slower than the clock at the top. To prevent the person from knowing whether the lift is accelerating or in gravity, the same thing must happen in a gravitational field, i.e. time must run slower the closer you are to a gravitating body. This is exactly what is called gravitational time dilation, and it is an inevitable consequence of the theory.

Thus a massive body not only warps space but also sets up a time gradient, where time moves slower the closer one is to the body.

The time gradient around the massive body, here Earth
[Image: PBS Space Time]

And this time gradient is exactly what causes the force of gravity. 

How is this Time Gradient responsible for the Force of Gravity?

Imagine a tea-cup floating freely in empty space, with two clocks attached – one at the top of the cup and one at the bottom. There is no gravitational field, so both clocks are synchronised and tick at the same rate, i.e. both clocks move with the same speed through the time dimension. Now introduce a gravitating body: it sets up a time gradient where time flows slower at points closer to the body and faster at points away from it. The situation can be pictured as two boats riding downstream, connected by a rigid rod. Let the stream flow in the t direction, with the perpendicular direction being one of the spatial dimensions. The two boats are the two clocks; initially, the boats move entirely along the stream, i.e. their four-velocity (velocity in the 4D spacetime) is entirely in the time direction and their spatial velocity is zero. Now if there were a velocity gradient in the stream, i.e. if the boat on the right were going faster than the one on the left, the rigid rod connecting them would exert a torque, turning both boats to the left,

[Video: PBS Space Time]

i.e. the motion which was initially only time-like has now acquired a spatial velocity, purely because there is a velocity gradient in the time direction.

The same is the situation with the tea-cup and its clocks. When in a gravitational field, the clock at the top moves faster than the clock at the bottom. Since the body is rigid, this causes it to gain velocity in the spatial direction, and this is always towards the region of slower time. Every body can be considered as being made up of many clocks; every subatomic particle is a clock in itself, as it feels the flow of time. So every composite body under a time gradient experiences this force of gravity. The situation generalises even to point particles, since these can be considered as two clocks separated by an infinitesimal distance.

Thus when someone says that gravity is the curvature of spacetime, it implies that not only the curvature of space is important; the time gradient set up by the mass-energy source also plays a prominent part, giving you a huge spatial velocity for a small price in temporal (time) velocity, the slowing down of time. Gravity is the force that arises from a body’s tendency to move towards regions of slower time in the presence of a time gradient.
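In the weak-field (Newtonian) limit this statement can be written compactly. With Φ the Newtonian potential, the local clock rate and the resulting spatial acceleration are related by a standard weak-field formula, quoted here for illustration:

```latex
\frac{d\tau}{dt} \approx 1 + \frac{\Phi}{c^{2}},
\qquad
\vec{a} \;=\; -\nabla\Phi \;=\; -c^{2}\,\nabla\!\left(\frac{d\tau}{dt}\right)
```

Since Φ is more negative closer to the mass, clocks there run slower, and the gradient of the clock rate points bodies toward the region of slower time.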

  

[Comics: XKCD]

Breakthrough in Wearable Devices: Your Body Converted to Battery!

“Off! Ugh! The summer this time is too hot man.”

“I need to charge my smart-watch soon”.

Can these two phrases be related? Can they lead to some interesting innovative tech?

Well! Science has always been interesting, and so is the field of wearable devices these days. A lot of research is going on in this area and a lot of technology is developing every day. Let me relate those two phrases first…

“Your body heat can be converted to electricity to charge your smart-watch”.

This sounds fictional. But turning fiction into reality is what science does, and researchers at the University of Colorado have developed exactly such a tech, one which can transform your body into a battery!

Converting fiction to reality: Wearable Devices for the future
(Source: Nanowerk)

Overview of the Tech: Future of Wearable Devices

This is a device of the future. Wear it like a ring, a bracelet, even a whole t-shirt, or any other wearable form factor that makes contact with skin. The technology taps into a person’s natural heat, engaging Thermoelectric Generators (TEGs) to convert the body’s heat into electricity. Thus, we can say that these are “battery-less” wearable electronics…this is just too cool!

And sustainability is not a problem at all! The device proposed here self-heals when damaged and is fully recyclable. This makes it a cleaner and greener alternative to conventional systems.

Let us first understand the core principle involved in this technology. Most wearable devices use a component called a Thermoelectric Generator (TEG). What is it?

Thermoelectric Generators: The backbone of wearable devices
(Source: SlideShare)

Well, these are small, compact and extremely useful components fitted into the tech. They convert a heat flux (the flow of heat driven by a temperature difference) directly into electrical energy. They are made of materials which have high electrical conductivity and low thermal conductivity. Nanotechnology plays a major role in fabricating them into wearable devices, as they need to be small yet efficient. Without getting much deeper into this particular component, consider that it is one of the most promising energy sources in the field of wearable electronics and in the growing field of the Internet of Things (IoT).

For all these goodies, some factors make TEGs less feasible: rigidity, brittleness and inflexibility restrict their use. Now, however, these limitations have been overcome for the first time, and a self-healable and recyclable TEG has been introduced, with super flexibility and superior stretchability. Using advanced thermoelectric films, fibers and/or organic materials, the whole component was synthesized with a record-high open-circuit voltage (1 V/cm²) among all other TEGs.
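To get a feel for the numbers involved, here is a minimal sketch of how TEG output is estimated, using the textbook Seebeck relation and the matched-load power formula. The Seebeck coefficient, couple count and internal resistance below are illustrative assumptions, not the actual parameters of the device described here:

```python
def teg_open_circuit_voltage(seebeck_v_per_k, n_couples, delta_t_k):
    """Open-circuit voltage of a TEG: V_oc = S * N * dT (Seebeck effect)."""
    return seebeck_v_per_k * n_couples * delta_t_k

def teg_max_power_w(v_oc, internal_resistance_ohm):
    """Maximum power delivered to a matched load: P = V_oc^2 / (4 * R_int)."""
    return v_oc ** 2 / (4 * internal_resistance_ohm)

# Illustrative values: 200 uV/K per couple, 100 couples,
# 5 K gradient between skin and ambient air
v_oc = teg_open_circuit_voltage(200e-6, 100, 5.0)  # ~0.1 V
p_max = teg_max_power_w(v_oc, 10.0)                # ~0.25 mW
```

Even with optimistic assumptions the harvested power is in the milliwatt range, which is why TEGs target low-power wearables and IoT nodes rather than phones.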

Getting into the skin

For knowledge and information enthusiasts, let me give the specific details of this wearable-devices innovation. The TEG system is configured by integrating high-performance modular thermoelectric chips, dynamic covalent thermoset polyimine as substrate and encapsulation, and flowable liquid metal as electrical wiring, through a novel mechanical architecture design of “soft motherboard-rigid plugin modules” (SOM-RIPs). These all work together to give the TEG system a Lego-like reconfigurability which allows its users to adjust and customize the energy-harvesting device as per the thermal and mechanical conditions.

The future of wearable devices.
Flexibility, configurability and self-healing property.
(Source: ScienceAdvances)

Thermoelectric chips are fabricated by depositing thin-film bismuth (Bi) and antimony (Sb) chalcogenides onto polyimide films using thermal evaporators; these serve as the n-type and p-type legs. To improve the crystallinity and performance of the chip, the thermoelectric films were annealed at 320 °C for 26 min in an argon atmosphere. And lastly, Au-Ge electrodes were deposited using a thermal evaporator to form the connections between the n-type and p-type legs.

Sorry about the previous paragraphs, they got too cringey. But the final product of all this is as soothing as a diamond ring, I must say. It appears to be a beautiful cross between a bracelet and a miniature computer motherboard. So, the next time you go for a stroll or a tiring jog, avoid wearing fancy bracelets and costly rings. Wear the future. Wear the wearable devices’ innovation. A perfect device for all (rather, all hard workers) which converts body heat into electrical energy.

Novel Wearable Electronics: Excellent mechanical properties (Source: University of Colorado Boulder)

This all sounds big, while the real chemistry is small and minute. The bond exchange reactions happening within the dynamic covalent thermoset polyimine network, together with the flowable liquid-metal electrical wiring, make the whole wearable electronics technology unique. It becomes self-healable, recyclable and Lego-like configurable. With excellent mechanical properties, it can be stretched and worn on a finger while remaining functional.

The Positives of this Wearable Device

Consider that you have gone for a jog or a brisk walk. Exercise makes your body heat up, and that heat radiates out into the cool air around you, becoming of no use. Now, this wearable device captures that heat, preventing it from dissipating, and its TEGs utilize it to generate electricity. Thus, your body just got converted into a battery (rather, a pseudo-battery).

Using this device, an average person taking a brisk walk can generate about 5 volts of electricity, and that too from a device just the size of an ordinary sports wristband. And believe me, that is more than what many watch batteries can muster.

The Most interesting part comes now!

While wearing it, if the device breaks down or tears up, just pinch the broken ends together and they will seal back up in no time. Okay, so you must be thinking, “what if I am fed up of wearing the device?”

No worries, the researchers have an answer to this too. Just put it in a special liquid solution. This will separate out the electronic components and dissolve the polyimine base. And that’s it! Now each and every component can potentially be reused. With such fascinating properties, wearable devices like this can become fully commercial in the next 5 to 10 years…or maybe sooner, as reported by one of the researchers.

Einsteinium: Exploring the Edge of the Periodic Table


Credit: © Intothelight Photo / stock.adobe.com

The arrival of the nuclear age has given mankind the expertise to synthesize new elements of higher atomic mass and number. While these human-made elements are of great research interest, they are short-lived and radioactive, which indicates limited commercial interest at this time.

Since Einsteinium was discovered in 1952 at the Berkeley National Laboratory in the wreckage of the first hydrogen bomb, scientists have performed very few experiments with it because it is so difficult to make and is radioactive. A research group of Berkeley Lab chemists has defeated these barriers to publish the first study characterizing some of its attributes, unlocking the door to a deeper knowledge of the remaining transuranic elements of the actinide series.

The study was published in the journal ‘Nature’ and received contributions from scholars, researchers and postdoctoral fellows of Berkeley Lab, Los Alamos National Laboratory and Georgetown University. With less than 250 nanograms of the element, the team estimated the first-ever einsteinium bond distance, a fundamental characteristic of an element’s interactions with other atoms and molecules.

One of the professors from UC Berkeley notes that not much is known about Einsteinium; it was an extraordinary accomplishment that they were able to work with such a small amount of substance and still contribute to inorganic chemistry. Such findings are vital because the more we investigate the chemical behaviour of elements, particularly the transuranic ones, the more we can apply this understanding to the development of new materials and new technologies, and certainly not just with Einsteinium, but with the rest of the actinide series too.

Einsteinium is Short-lived and Hard to Synthesize

The High Flux Isotope Reactor and more such instruments have to be deployed in order to isolate a significant sample of radioactive elements such as Einsteinium.
(Image credits: COSMOL)

The experimental equipment and facilities used were not available decades ago when einsteinium was first discovered. The Molecular Foundry at Berkeley Lab and the labs at the SLAC National Accelerator Laboratory have the facilities for luminescence spectroscopy and X-ray absorption experiments, which proved to be the pathfinders in the current research.

Among all the steps of this experiment, the most important was to transform the sample into a form which could be handled in the experiments, while carefully managing the radioactive, disintegrating nature of the element Einsteinium.

The material was made at Oak Ridge National Laboratory‘s High Flux Isotope Reactor, one of only a few sites in the world capable of making einsteinium, which requires bombarding curium targets with neutrons to initiate a continued chain of nuclear reactions. The first predicament they confronted was that the sample was contaminated with a significant quantity of californium, as making pure einsteinium in a suitable amount is exceptionally challenging.

So they had to discard their initial plan to use X-ray crystallography — which is acknowledged as the gold standard for collecting structural information on extremely radioactive molecules but demands a pure sample of metal — and instead developed a new way to make samples and support element-specific research methods. Researchers at Los Alamos contributed crucial aid in this step by designing a sample holder uniquely adapted to the difficulties inherent to einsteinium.

Then, battling radioactive decay was an added challenge. The Berkeley Lab team carried out their experiments with einsteinium-254, one of the more stable isotopes of the element. It has a half-life of 276 days, the time for half of the material to decay. Although the group was able to complete many of the experiments before the coronavirus pandemic, their planned follow-up experiments were interrupted by pandemic-related shutdowns. By the time they were able to get back into their lab last summer, most of the sample had decayed.
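The half-life arithmetic here is straightforward to check; a quick sketch of exponential decay using the 276-day half-life quoted above:

```python
def remaining_fraction(elapsed_days, half_life_days=276.0):
    """Fraction of a radioactive sample left after elapsed_days,
    given its half-life: N/N0 = (1/2)^(t / t_half)."""
    return 0.5 ** (elapsed_days / half_life_days)
```

After one half-life (276 days) exactly half the einsteinium-254 remains; after a year of pandemic closure, only about 40% of the original sample would be left, which matches the team's predicament.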

Bond Distance and Chemistry of Einsteinium

Einsteinium was found in the remains of a hydrogen bomb test explosion in 1952; since then it has been a topic of interest among nuclear chemists keen to investigate its properties. (Image credits: The Guardian)

The pandemic hit the research environment all over the world hard, but the researchers still managed to work efficiently, measure a bond distance with einsteinium, and discover some physical chemistry behaviour that differed from what would be expected from the rest of the actinide series.

Determining the bond distance may not appear impressive, but it’s the first thing one needs to know about how a metal binds to other molecules. It also helps answer further queries, like what kind of chemistry this element is going to have with other atoms and molecules.

Once scientists have this notion of the atomic arrangement of a molecule that contains einsteinium, they can try to find fascinating chemical properties and enhance our perception of periodic trends. By investigating this piece of data, we achieve a better and more comprehensive understanding of how the whole actinide series behaves. The actinide series holds a very important position in applications, as it contains elements and isotopes that are valuable for nuclear power production or radiopharmaceuticals.

Tantalizingly, this research also opens up the opportunity of exploring what lies beyond the edge of the periodic table, and possibly discovering a new element. With more such findings we find ourselves in a position to understand a little better what happens toward the end of the periodic table; the next step is that we could also envision an einsteinium target for exploring new elements. Looking back at the history of the discovery of elements, the heavier ones in particular, suggests that new elements were often discovered while isolating and investigating the last known ones.

Each time any such discovery is made, scientists remind us that what we know about the chemical and physical nature of the known elements is limited, and there’s still an ocean of properties we need to learn.

Are we a simulation?


Do you think the characters in the games we play have some kind of consciousness? Is it possible that when you play as them, they might be wondering what it is that constrains them and drives them forward towards the objective? They might be really staggered by the fact that some random act of theirs, done unconsciously without any control, leads to some valuable treasure, loads of money, or even their death, while it is actually you who makes the decisions and drives them forward. They might be wondering if it is fate that some manage to succeed and some get killed, while it actually depends on the experience of the player playing the character.
Now, imagine one such simulation, except that it is more complicated. Not complicated for you to play, but complicated at the rendering level. Everything that takes place in the game is modelled and rendered from only a set of a few basic rules, i.e. from atom-atom interactions. The simulation started off by defining a calculated amount of matter and energy, all balanced to just form the first elemental hydrogen and to go on with the reactions that form stars and stellar systems. The developers were not left with many resources after programming all this, so they decided to continue the simulation on only one randomly selected planet, and again, they let only one species of life evolve and become so-called intelligent. It is possible that these species in the simulation might develop a consciousness (or maybe be programmed to have it) and one day stare at the infinite universe and wonder if they are alone, or why the universe favoured their evolution, or even why the universe was formed in the first place. They will do all this while being curtained from the fact that the world they live in was accurately programmed by us to be in that exact form.
Today’s computers are so powerful that rendering a hyperrealistic 3D world is a piece of cake.


With games like Assassin’s Creed Odyssey and Cyberpunk 2077, it is now hard to distinguish between the real world and the simulation world.

If the development pace is maintained, it might not be long before we could possibly simulate the entire universe. So… It might also be true that we are beings inside such a simulation, living lives constrained by computer code written by an “Alien” civilisation in an entirely different universe.

The simulation Trilemma

Rene Descartes once said, “It is possible that I am dreaming right now and that all of my perceptions are false.” The idea that the universe we live in is a simulation is not a recent one, but it came to the limelight in 2003, when the philosopher Nick Bostrom proposed a trilemma that he called “the simulation argument”. Bostrom’s trilemma argues that one of three unlikely-seeming propositions is almost certainly true:

  1. “The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or
  2. “The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero”, or
  3. “The fraction of all people with our kind of experiences that are living in a simulation is very close to one.”

If you did not understand the above points, don’t worry. Simplified, the trilemma points out that any civilization which was once at the same level as humans, but later developed further technologically, would have enormous computing power. For such a civilisation, running such a complicated simulation would not be a problem at all. Bostrom goes on to claim that if the third of the above three propositions is true, then it is very probable that we are living in a simulation.

We might be a simulation, stuck in the eternity of the computer world and unable to comprehend our creator’s will.
[Image: www.snopes.com]

In a podcast with Joe Rogan, Elon Musk said “If you assume any rate of improvement at all, games will eventually be indistinguishable from reality” before concluding “that it’s most likely we’re in a simulation.”

The famous astrophysicist Neil deGrasse Tyson also argued in favour of the theory. He said in an NBC News interview that he gives the hypothesis “better than 50-50 odds” of being correct, adding, “I wish I could summon a strong argument against it, but I can find none.”

Why this might be actually true?

There are so many reasons why this theory might be true, and let me talk about a few of those.
First of all, there is the Fermi paradox: there is no reason for us to be alone in this vast universe, with an enormous number of possible human-like civilizations. But still, we have not yet made extraterrestrial contact, and every reason we can think of is a mere hypothesis which is unprovable.
Then come the so-called “glitches in the matrix”: the Mandela effect and deja-vu. Many people seem to remember seeing Nelson Mandela’s death on TV in the 1980s, while there was no such airing and he actually passed away in 2013. There are many other examples of such mass false memories. Though many of us clearly remember black detailing on Pikachu’s tail, in reality, it’s just yellow. Most of us also don’t seem to remember the death of the legend Neil Armstrong, even though it happened recently, in 2012. Or, for Star Wars fans, Darth Vader never said his famous line “Luke, I am your father”; it was just “No, I am your father”, and C-3PO isn’t all gold but has a silver leg. Such a mass false memory is today called the Mandela effect, and there is no explanation for it.
And the same goes for the feeling that you have already experienced something at some point in time, while it is the first time you are actually experiencing it, i.e. the unexplained deja-vu too.
In 2017, a group of researchers at the University of Washington showed that they could embed malicious computer code into physical strands of DNA. While synthesising the DNA, they coded it in such a way that, once analysed by a computer, the output of the analysis would take over the system and could take command of it. Their aim was to show that computers working in gene sequencing were vulnerable to attack. But they may also have unintentionally suggested that what we perceive to be biological reality could, in fact, be computer code.

Is DNA a set of computer codes?

What’s more bizarre is that a theoretical physicist James Gates claims he has identified what appears to be actual computer code embedded in the equations of string theory that describe the fundamental particles of our universe. He says he found “error-correcting codes — they’re what make [web] browsers work. So why were they in the equation I was studying about quarks and electrons and supersymmetry?”
And the list is non-exhaustive, it is never-ending and you can always find newer arguments showing that the universe is indeed a simulation.

Is it actually true?

This entire conjecture is based on the assumption that consciousness can be modelled as a computer program, or is something that a computer-programmed entity can develop. But we have zero understanding of consciousness right now, so it’s impossible to disprove this speculation. Some methods have been proposed to test the hypothesis. Any computer would have finite computational power, and hence there must be a tiny “pixel” of spacetime that the computer can render. Though unproven, the Planck scale could be this pixel in our universe, but as said before, no proof for or against this is available. Another argument states that any computer program is subject to errors in the long run, so dynamic corrections need to be made for the program to keep working, and some might argue that the seeming shift in the values of universal constants like the Hubble constant could be this correction, but again, no proof.
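The “pixel” candidate mentioned above, the Planck length, follows from three fundamental constants; a quick computation of its value (standard physics, not itself part of the simulation argument):

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
G = 6.674_30e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0         # speed of light, m/s

# Planck length: the scale at which quantum-gravity effects are
# expected to dominate, l_p = sqrt(hbar * G / c^3)
planck_length_m = math.sqrt(HBAR * G / C**3)  # ~1.6e-35 m
```

At roughly 10⁻³⁵ m it is some twenty orders of magnitude smaller than a proton, far beyond any experiment's current resolution, which is why the "spacetime pixel" idea remains untestable for now.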

The consciousness, what exactly is it?

Our god might not be someone omnipotent but some teenage hacker in some other universe. If this theory is true, nothing would actually change, except for the fact that we need to sit with fingers crossed, hoping that there is no bug that crashes the entire program.
In the words of philosopher Preston Greene, it may be best not to find out if we’re living in a simulation since, if it were found to be true, such knowing may end the simulation.  

MI Air Charge Technology: An Astoundingly Insane Idea or An Intriguing Reality?

The progressive development of man is vitally dependent on invention.

Nikola Tesla

The most badass inventor of his time had a truly great vision. Of all his visionary and rather weird ideas, ‘Air Charge’, the concept of true wireless charging, was the most contemplated yet feasible thought which he etched on paper.

A ‘World of Wires’ was a phrase coined in 1877, when telephones ☎ were stereotypically viewed as communication devices with long strands of wires. But since then, tech has picked up a rapid pace and we have developed wireless telephones, wireless connectivity and wireless computers; and now, for the very first time, Xiaomi has been able to showcase “True Wireless Charging.” 😲 Goodbye to those frustrated feelings you experience when you forget to turn on your charger switch. 👋

Air charge is the ultimate solution to solve all the problems of wired charging. [Source- jokofy.com]

Goodbye dear cable. Wait! Too soon, isn’t it? 🤔

The Teardown of MI’s Air Charge Technology

Xiaomi claims that their technology will be able to charge smartwatches, bracelets, and most importantly phones. As per the company’s claims, the future of devices will be built on true wireless design.

But is this a marketing move by Xiaomi, or does this tech actually work?! To understand why other companies like UBeam, Wi-Charge, Energous, etc. failed to roll Air Charge tech onto the assembly lines, let’s understand the underlying concept here.

Xiaomi’s self developed Air charge tech can support charging upto 5 Watts. [Source- blog.mi.com]

The core technology lies in spatial positioning and energy channeling. The system has integrated 5-phase interference antennas that detect and locate the position of the mobile within a specific radius. A phase-controlled array consisting of 144 antennas then transmits millimeter waves straight to the phone through beamforming.

An example of a phased array patch. [Source- dlr.de]

The spacing between antenna elements is generally more or less around half the wavelength.

Now the relationship between frequency and wavelength (λ in meters, f in MHz) is:

λ = 300/f

Also note that the length of a half-wave dipole antenna depends upon the frequency of operation; with L in feet and f in MHz,

L = 492/f

So at 5 GHz or 5000 MHz, a half wavelength is

λ/2 = 150/5000 = 0.03 m or 1.18 inches

At 28 GHz, a half wavelength is

λ/2 = 150/28000 ≈ 0.0054 m or 0.21 inches

Many of the 5G bands in development use the same frequencies of operation as Air Charge; at such frequencies the wavelength is on the order of millimeters, hence the name millimeter waves.
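The figures above can be reproduced exactly (the 300/f rule of thumb is just the speed of light, 299,792,458 m/s, rounded); a quick sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def half_wavelength_m(freq_hz):
    """Half the free-space wavelength at freq_hz, the typical
    element spacing in a phased array: (c / f) / 2."""
    return C / freq_hz / 2.0

# 5 GHz  -> ~0.030 m  (about 1.18 inches)
# 28 GHz -> ~0.0054 m (about 0.21 inches)
```

The higher the frequency, the smaller the spacing, which is what lets a 144-element array fit inside a living-room charging box.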

At these frequencies, large arrays can be packed into a small space, with the size of each antenna becoming so small that it can be integrated onto a semiconductor chip.🤯 The phased array is important for gain enhancement, directivity, and interference minimization. The waves are transmitted to the mobile device via beamforming.

What the heck is Beamforming?

The word reminds me of a scene in Infinity War where Iron Man focuses all his sonic thrusters to hit one of Thanos’ children.😂 But imagination is still far from reality; beamforming has not been applied thoroughly to acoustics yet. Wait, I am going off track here. The main focus here is on electromagnetic waves.

Beamforming, in simple terms, is a technique used to transmit a wireless signal towards a specific receiver rather than having the signal spread in all directions like a broadcast antenna. All this works because of interference, the phenomenon demonstrated in one of the most famous experiments in physics, performed by the physicist Thomas Young.

Beamforming uses science of electromagnetic interference to make Wi-Fi connections more precise. [Source- networkworld.com]

A single antenna broadcasts its signal in all directions, much like a ripple that spreads across still water. But a phased array has many antennas transmitting simultaneously. Using interference, every antenna sends the same signal at a slight phase offset from its neighbours. The result is constructive interference at some points, which strengthens the signal, while destructive interference at other points renders the signal there undetectable.
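The constructive/destructive pattern described above can be sketched numerically. Here is a minimal uniform-linear-array model; the element count, half-wavelength spacing and steering angle are illustrative choices, not Xiaomi's actual 144-element design:

```python
import cmath
import math

def array_factor(n_elements, spacing_wavelengths, steer_deg, theta_deg):
    """Magnitude of the summed field of a uniform linear array at
    observation angle theta_deg.

    Each element gets a progressive phase shift chosen so that all
    contributions add in phase (constructively) at steer_deg.
    """
    psi = 2 * math.pi * spacing_wavelengths * (
        math.sin(math.radians(theta_deg)) - math.sin(math.radians(steer_deg))
    )
    return abs(sum(cmath.exp(1j * n * psi) for n in range(n_elements)))

# On the steered beam every phasor lines up: magnitude = n_elements.
peak = array_factor(16, 0.5, 30.0, 30.0)
# Away from the beam the phasors largely cancel.
off_beam = array_factor(16, 0.5, 30.0, -20.0)
```

Changing only the per-element phase offsets steers the peak electronically, with no moving parts, which is exactly what makes phased-array beamforming attractive for tracking a phone around a room.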

By focusing the signal where the actual receiver is, a phased array can deliver a superior, high-quality link without the power demands of an omnidirectional broadcast.

Beamforming techniques use electromagnetic interference to enhance the signal. [Source- commons.wikimedia.org]

On the mobile end, a miniaturized antenna array with a built-in ‘beacon array’ and ‘receiving array’ is attached. The beacon array, as the term signifies, broadcasts location information. The receiving array, composed of 14 antennas, converts the millimeter waves into electric energy through a rectifier circuit, turning the sci-fi experience into reality.

14 antennas on mobile devices end uses RF circuit to convert signals into electric charge. [Source- blog.mi.com]

Air Charge: A Technological Innovation or a Marketing Stunt?

As of now, Xiaomi’s charging tech delivers 5 watts of power at a steady rate to a single device within a radius of several meters. Apart from this, you can charge multiple devices at the same power, and Xiaomi claims that physical obstacles do not reduce the efficiency.

Xiaomi’s display of its Air Charge technology.

Till now, these kinds of sci-fi tech intrigued me, but I wanted to know what the critics were saying. As it seems, action is loud, but critics tend to speak louder. The companies I mentioned above failed because of two major reasons.

Firstly, it’s always efficiency. After seeing a few videos and reading some papers (not newspapers!😆), I concluded that under ideal conditions, to deliver 5 W of power you need at least 500-1000 watts of input power. That’s around 0.5-1% efficiency even in ideal conditions. You are probably wasting so much energy as heat that your room temperature would rise by 0.5 or 1°C (just kidding, though it can happen!😮).
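A back-of-the-envelope check on those numbers (the 500-1000 W input figure is my rough reading of public commentary, not an official Xiaomi spec):

```python
def efficiency_pct(p_out_w, p_in_w):
    """End-to-end power transfer efficiency, in percent."""
    return 100.0 * p_out_w / p_in_w

# 5 W delivered from 500-1000 W of input:
best = efficiency_pct(5, 500)    # 1.0 %
worst = efficiency_pct(5, 1000)  # 0.5 %
```

For comparison, Qi pad charging typically achieves tens of percent, so over-the-air charging pays a steep efficiency price for the convenience.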

Though the speaker exaggerates a bit here, the info provided is technically sound.

Secondly, it is not yet feasible or plausible. Even if the tech works, it needs a certain degree of improvement before hitting the market, or else the product would hit a wall. Another factor to consider is that it’s not universal or currently compatible: for the tech to work, you need an array at the device end too. And needless to say, it won’t work with your current-gen mobiles or even non-Xiaomi devices. Cost always seems to be a hidden factor at play in these kinds of technologies. Until the cost is revealed, assuming anything would be like ignoring air resistance in a real-world physics problem.

Hopefully, this time a big corporation has invested heavily in this, and Motorola too is working on a remote charging solution. Seeing the big players at play, I don’t think the other players are far behind. Lastly, what was once a fragment of the imagination of Nikola Tesla’s mind is now being built into reality.

Dreams are Nothing but a wild conceit; An Idle Fantasy; A Visionary Scheme.

RDX

Project 1851: Winning the Sustainability Race


Home. A place where every expedition starts and a place where you want to be at the end of every journey. A lavish entrance, huge car porch, wooden doors, spacious interiors, all stuffed with ultramodern amenities; these are the ingredients which define a 21st-century home. Some say it is need, some say passion and some, show-off. Well! This is the way we all dream about it…

Hold on! The community residing at Chester Springs, Pennsylvania, has its own dreams to fulfil, its own vision to follow and its own way of thinking to change the world. So, let’s get down to the streets of Pennsylvania in search of an amazing sustainability project going on out there!

A very simple-looking house: the future of sustainability and smarter housing.

Why the name “Project 1851”?

You must have been wondering since the beginning of the article why such a specific name was assigned to the project. Interestingly, the answer lies in the sands of time. The largest flood in American history was witnessed by the southern states when the Mississippi flooded in 1851. In the very same year, the Foucault pendulum, the first device to demonstrate the earth’s rotation, was introduced. And coincidentally, the site where the project is being developed has the plot number ‘1851’. So now it is obvious to you all why ‘1851’ in particular…

What is “Project 1851”?

Global warming. This is the term our ears have been hearing for the past decade, and we still can’t do anything solid about it. Apart from this, some wild natural calamities have taken down the precious elements of mother nature: forests. At the same time, a biological calamity is taking down humankind: COVID-19. Unfortunately, we humans are the reason behind both, and there is a serious need to take steps that reduce, or rather reverse, our deeds as we move towards sustainability.

Project 1851: In the lap of mother nature
(Source: Project 1851)

Project 1851 is a one-stop solution to this. Sustainability is at the core of its planning, and technology is the key empowering it further. It is a new standard of building: a new space in the lap of mother nature and a new tomorrow, helping to reverse global warming given its long-term impact on coming generations.

The Sustainability paths. The planning.

As Einstein said, “You can never solve a problem on the level on which it was created.” We need to think beyond. This is what Project 1851 embodies. A community of sustainability enthusiasts and people happy to work for the cause came together; they collaborated, made better decisions and the project was planned.

The main cabin
(Source: West Chester University)

While the world thinks that solar and wind are the future of sustainability, this community had a whole new perspective: what if they built an efficient mechanical system in the first place and used every available passive design approach? On the basis of this thought, they designed for future adaptability and a great building envelope, one that needs the least solar input possible.

This turned out to be a robust idea and an innovative approach. Project 1851 portrays an eco-friendly yet tech-rich luxury cabin stay. It will be a unique icon of environmental stewardship and high-performance building design.

Details of the Project 1851 to attain sustainability

As a model of environmentally sensitive and sustainable design, Project 1851 encourages others to build lightly with the land, leaving only traces worth preserving: for family, friends and all. With a plot size of 1.1 acres and a possible completion date of July 13th, 2021, this is the project that will change the future of green and sustainable housing, and of smart homes at the same time.

A perfect place to spend your time at
(Source: core.ac.uk)

Since the site is flood-prone, necessary measures have also been taken to keep the entire property safe and sound. The main cabin, gym and greenhouse will stand on elevated stilts, well above the flood level. Notably, 97% of the property lies within the 100-year flood plain.

This project will offer a template for projects that haven’t been developed yet, perhaps even yours, because it provides a roadmap for assembling a carbon-positive dwelling. Wait! Did I just mention carbon-positive?

Yes! While other sustainability projects plan for carbon-neutral, this one is far ahead of them, winning the sustainability race pretty soon.

Proposed site map and plans:

Floor plan of the main cabin
(Source: core.ac.uk)
Site map of the 1.1-acre Project 1851
(Source: core.ac.uk)

Extraordinary, out-of-the-box sustainability

These are some of the cool features that make the project unique and out of the ordinary:

Eco-Gym

This is the most resourceful thing possible: fitness, sustainability and fun all at once. Keep working out, and the energy you spend gets collected, converted and used to power your own house. Well, the house is smart, so fun inside it is a given…

Power Positive Home

A solar glass roof and a desirable off-grid setup make this a unique property. Not only does this dwelling produce no carbon footprint, which is obvious, it actually generates surplus power that can be stored. This is state-of-the-art technology and a first-of-its-kind sustainable housing.

Key Sustainability Producers

Water – Rainwater collection and purification (carry-in back-up, well and pump back-up)

Waste – Cyclone/pulverization, quick drying and bagging (pump-and-dump back-up)

Insulation – Icynene spray foam and recycled alpaca fur

Heating and cooling – Geothermal

Water heating – Tankless water heater

Goals

LEED Platinum Certified Home

Leadership in Energy and Environmental Design is the world’s most widely recognized green building certification.

EnergyStar Certified New Home

The blue Energy Star label on a new home means it was designed and built to Energy Star’s rigorous requirements.

Imagination builds the goals, dreams fuel them and perseverance fulfils them.

OSD

Water Scarcity: Unconventional Techniques to restore human well-being

Water scarcity is among the top five global risks affecting people’s wellbeing. In water-scarce regions, the situation is grim. Typical sources like snowfall, precipitation, rivers and easily accessible groundwater are being hit by climate change, and supplies are falling as demand expands.

In these countries, water is a decisive challenge to sustainable development and an underlying cause of social turmoil and instability. Water scarcity also disrupts traditional seasonal human migration routes and, together with other water vulnerability factors, could reshape migration patterns.

Water-scarce countries need a fundamental change in planning and management. Scientists from around the world are looking at how to do this through the productive exploitation of unconventional water resources. From Earth’s seabed to its upper atmosphere, we have a variety of water reserves that could be tapped. But making the most of these demands a diverse range of technological interventions and innovations.

Trapping Fog

Trapping fog to end water scarcity

Water embedded in fog is increasingly seen as a source of drinking water in arid areas where fog is dense and occurs regularly. Fog can be collected using a vertical mesh that intercepts the droplet stream; the water then flows down into a collection, storage and distribution system.

Different types of mesh materials can be used in fog collectors, such as aluminum, plastic, plexiglass and alloys. The success of a method like this depends on the geology and topography, which need to be favorable to optimal fog capture, but it could work in dry upland and coastal regions.

With the vital involvement of local communities and professional assistance from local organizations, fog water harvesting is a low-maintenance option and a green technology for providing drinking water. Fog water collection schemes have been implemented in different parts of the globe, including Chile, Eritrea, Israel and Oman.

Cloud seeding

Under the right circumstances, rain enhancement through cloud seeding has the potential to increase the volume of water collected from the air. The technique involves dispersing small particles into clouds or into their vicinity. These particles act as nucleation sites for raindrops or ice crystals, supporting their growth. In turn, this makes rain or snow more likely.

Deployments of cloud seeding technology in different countries have shown that rainfall can be increased by up to 20% of the annual average, depending on the available cloud sources and types, cloud water content and core temperature. Since barely 10% of the total cloud water content is discharged to the earth as precipitation, there is huge potential for rain enhancement technologies to improve rainfall in areas affected by scarcity. This could prove to be a major tool towards ending water scarcity issues in many parts of the globe.
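To get a feel for what a 20% enhancement could mean on the ground, here is a small illustrative calculation; the 250 mm baseline rainfall and 1 km² catchment are assumed numbers for illustration, not figures from any specific seeding programme:

```python
# Illustrative yield from a 20% rain enhancement over a catchment.
# Baseline rainfall and catchment area are assumptions for illustration.

baseline_mm = 250.0   # assumed annual rainfall of an arid region, in millimetres
enhancement = 0.20    # the up-to-20% increase cited for cloud seeding
area_km2 = 1.0        # assumed catchment area, in square kilometres

extra_mm = baseline_mm * enhancement   # additional rain per year, in millimetres
# 1 mm of rain over 1 km^2 amounts to 1,000 cubic metres of water.
extra_m3 = extra_mm * 1000 * area_km2
print(f"Extra water: {extra_m3:,.0f} cubic metres per year")
```

Even these modest assumptions yield tens of thousands of cubic metres of additional water per year for a single square kilometre.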

Holding evaporation. Ending water scarcity.

As dry areas receive small amounts of rainfall, micro-catchment rainwater harvesting may help capture rainwater on the ground, where it would otherwise evaporate. There are two major types of micro-catchment rainwater harvesting methods for ending water scarcity. The first is rooftop harvesting, where water is collected and stored in containers or similar devices.

The second is water harvesting for agriculture, which involves collecting the rainwater that flows off a catchment area into a small reservoir or into the root zone of a cultivated field. The catchment surface may be left as is or treated with a substance that keeps the soil from absorbing water, especially in areas with sandy soils. Because of the variable nature of runoff, it is important to collect as much rainwater as possible during the wet season so that water scarcity can be eased.
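The yield of both approaches can be sketched with the standard runoff relation, volume = catchment area × rainfall × runoff coefficient. The sketch below uses purely illustrative inputs (a 100 m² roof, a 2,000 m² treated field plot and 300 mm of seasonal rain); a real design would measure these on site:

```python
# Harvestable volume via the standard runoff relation V = A * R * C.
# All input figures are illustrative assumptions, not site data.

def harvest_litres(area_m2: float, rainfall_mm: float, runoff_coeff: float) -> float:
    """1 mm of rain on 1 m^2 yields 1 litre before runoff losses."""
    return area_m2 * rainfall_mm * runoff_coeff

# Rooftop system: small area, but a smooth surface sheds most of the rain.
roof = harvest_litres(area_m2=100, rainfall_mm=300, runoff_coeff=0.85)

# Treated field catchment: much larger area, lower runoff coefficient.
field = harvest_litres(area_m2=2000, rainfall_mm=300, runoff_coeff=0.5)

print(f"Rooftop: {roof:,.0f} litres/season; field catchment: {field:,.0f} litres/season")
```

This also shows why treating the catchment surface matters: raising the runoff coefficient is often cheaper than enlarging the catchment.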

Desalinating seawater

Desalinating water to overcome water scarcity

The process of desalination separates salt from seawater or saline groundwater to make it drinkable. This lets us collect water beyond what is obtainable from the water cycle, providing a climate-independent and constant supply of high-quality water. Seawater desalination has been growing fast thanks to strides in membrane technology and materials science, and these improvements are projected to cause a meaningful decrease in production costs by 2030.

More places are expected to grow reliant on desalinated water because of its declining cost and the rising costs of traditional water supplies. While desalination currently produces approximately 10% of the municipal water supply of coastal cities worldwide, by 2030 this share is expected to reach 25%, easing water scarcity to a great extent.

Iceberg Harvesting: Icy way to overcome water scarcity


Dragging an iceberg from one of the polar ice caps to a water-scarce country may not seem like a sensible solution to water deficits, but scientists, scholars and lawmakers are contemplating iceberg harvesting as a potential freshwater source.

Towing an iceberg across the ocean is technically feasible, based on a theoretical four-part process: establishing a suitable source and destination, calculating the required towing power, carefully predicting melting in transit, and evaluating the economic viability of the entire endeavor. Countries like the United Arab Emirates and South Africa are viewing iceberg towing as a way to fill gaps between their water demand and supply.

Water and climate change are interconnected, and climate change raises the likelihood of extreme water scarcity in dry areas. Harnessing the potential of unconventional water resources can help increase the resilience of water-scarce populations to climate change, while expanding water supply sources.

We need to recognize and encourage systems of unconventional water resources that are environmentally sound, economically viable, and supportive of water-related sustainable development, under the 2030 Sustainable Development Agenda and beyond.

This issue of water scarcity shows how helpless and impotent we are in front of mother nature, even after so many years of evolution…

Water is also the strangest liquid on earth. To get more such interesting content, click here.