
QLED: The Brand New Quantum Torchbearer Of The Futuristic Tech

After an exhausting and hectic week of ‘Work from Home’, your mind is busy planning the most awesome, relaxing weekend you deserve! With Govt. guidelines echoing in your ears after hearing the same caller tune for the 17th time, you choose to follow them (like you had any choice!😜). Rather than dwelling on that fact, you decide to act. With a popcorn tub in one hand and a remote in the other, you are finally set to watch the new series your friends were talking about. A single click elevates the whole living room into a different zone. A perfectly placed wallpaper on the wall turns Vantablack, and a 4-letter futuristic word, ‘QLED‘, appears on the screen with the tag-line “Experience the Immersive.”

QLED TV will disrupt the whole display industry with its unique tech which can create billions of shades with precision. This video is just an example of the power of QLED. (PS: I have not been paid by LG to show its product here😝) [Source- Vimeo/thehavok]

But QLED’s already been invented and is on the market, right?!🤨 Well, this is a whole different level of QLED: not some fancy $2000 8K TV but a marvel straight outta the Stark Industries’ lab. This is a Quantum LED TV.

Tearing Down An OLED

The screen has become our “Window to The Other World”, or the second set of eyes, as my grandparents used to say (or mock!😞). As simple as the design of this tech seems, the more complicated it becomes when you try to scale it down or quantize it. QLED has long been sought as the very foundation of flexible electronics.

Let’s add a touch of oversimplification here and start with the most widely used panel on the market: LCD, or Liquid Crystal Display. The name often carries a misconception with it; a liquid crystal is neither a plasma nor a solution but a phase of matter between liquid and solid. A basic LCD consists of two polarizers, with a layer of nematic liquid crystal applied to one of them and the whole system sandwiched between two electrodes. The source of light is a backlight, generally white light (containing the different shades of RGB). It is allowed to pass through a polarizing plane (horizontal), then through the liquid, in which the molecules align themselves to twist the light by 90° so that it passes out through the polarizer at the other end (vertical).

The full assembly and working of LCD
Working and assembly of LCD showing image formation at the screen. [Source- Samsungdisplay]
How nematic liquid crystals twist light
This figure shows how nematic liquid crystals twist light when a voltage difference is applied. [Source- gifer]

The main principle of LCD is ‘light blocking’. When a current is applied, the crystals untwist, the light passes straight through and is blocked at the other end, so the pixel appears dark. Red, green, and blue filters sit on top and convert the incoming light into the specified colors; these are called sub-pixels. Millions of pixels filter and recombine the light to produce the right blend of shades on the screen. These are the very basics of the working of an LCD.
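The filter-and-recombine idea above can be sketched in a few lines of Python. This is a toy model, not display firmware: the function name and the idea of a per-sub-pixel “pass fraction” are my own illustration of how sub-pixel apertures blend into one on-screen color.

```python
# Toy model of LCD sub-pixel blending: each sub-pixel passes a fraction
# (0.0 to 1.0) of the white backlight through its colour filter.
# 1.0 = crystals fully twisted, light passes; 0.0 = untwisted, blocked.

def lcd_pixel(r_pass, g_pass, b_pass, backlight=255):
    """Return the 8-bit RGB colour produced by three sub-pixel apertures."""
    return tuple(round(backlight * p) for p in (r_pass, g_pass, b_pass))

print(lcd_pixel(1.0, 1.0, 1.0))  # all sub-pixels open: white
print(lcd_pixel(0.0, 0.0, 0.0))  # all blocked: "black" (real LCDs still leak backlight)
print(lcd_pixel(1.0, 0.5, 0.0))  # an orange blend
```

The “black” case is exactly where LCD struggles in practice, as the next paragraph explains: the backlight is still on behind the blocked crystals.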

The detailed structure of LCD
This is a cross section of a typical LCD pixel. [Source- j-display]

Now, every tech has its cons, and LCD has some major ones: it wastes energy, it can’t express true deep black levels, and it is rigid. The need for a new technology that could replace LCD in the foreseeable future gave birth to OLED, the Organic Light-Emitting Diode.

How OLED work and why it is different from LCD
OLED uses emissive technology to produce images on screen unlike LCD which uses backlight. [Source- IEEE]

Unlike LCD, OLED does not use a backlight. It is an emissive technology in which the sub-pixels themselves emit red, green, and blue light. The fundamental structure of an OLED has a transparent anode attached to a substrate, with a conductive layer of organic material coated on it. Another organic material makes up the emissive layer, and a cathode sits on top. When current is applied, the anode is positive with respect to the cathode, so electrons travel from cathode to anode. The conductive layer starts handing its electrons over to the anode, while the emissive layer captures the incoming flow of electrons and becomes negatively charged.

Structure of OLED with different layers with a glass/foil substrate. [Source- circuitstoday]
How an OLED works and the principle behind it.
This figure depicts the working of OLED and different layers of it. [Source- explainthatstuff]

With the conductive layer rich in holes (i.e., short of electrons) and the emissive layer rich in electrons, the holes, being more mobile, travel towards the emissive layer. The holes and electrons recombine due to electrostatic forces, and light is produced in the emissive region. The light then passes through colored sub-pixels, producing a sharp and crisp image on the screen. Adding more layers increases the efficiency of the OLED. This tech gives an ultra-thin display, superb black levels, and blazingly fast refresh speeds.

OLED has the best black levels but QLED is soon gonna change the whole game.
OLED has interesting black levels but QLED has more optimized and superb blacks. [Source- gifs.com]

Is this the end of the road? Nah, otherwise why would I be writing this article? OLED also has energy problems, and manufacturing it is pretty expensive. That’s why the path to tackling the problem LED to the fusion of quantum dots and OLED, dubbed ‘QLED’.

The Quantum Dot Future And QLED

But, what the heck is a quantum dot?!🤔

Quantum dots made in a solution.
The future of printing tech; Quantum dots made in a solution just like ink. [Source- quantumsolutions]

Typically a few nanometres in size, quantum dots are tiny semiconductors made of materials such as zinc selenide or cadmium selenide. Their main property is their ability to convert short wavelengths (blue: 450 to 495 nm) to nearly any color in the visible spectrum. When a photon hits a quantum dot, an electron-hole pair is created. The pair recombines to emit a new photon whose color depends on the size of the quantum dot: bigger dots emit longer wavelengths close to red (620-750 nm), while smaller ones emit shorter wavelengths closer to the violet end (380-450 nm). This kind of tunability offers wide scope for research and applications in various fields.

Experiment on Quantum dots to produce lights.
When a light (photon source) is shone on the solution of Quantum dots, it produces light of a specific wavelength. [Source- gyfcat]
Quantum dot
Quantum dots emit wavelength according to their size. [Source- nanosysinc]
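The size-to-color relationship can be sketched with a simplified Brus-style confinement model: the band gap grows roughly as 1/d² as the dot shrinks, and a bigger gap means a bluer photon. The bulk CdSe gap (~1.74 eV) is real; the confinement constant `C` below is an illustrative round number, not a fitted material parameter.

```python
# Sketch of quantum-dot size vs. emission colour: E_gap(d) = E_bulk + C / d^2.
# Smaller dot -> stronger confinement -> bigger gap -> shorter wavelength.

E_BULK_EV = 1.74   # bulk CdSe band gap, eV (real value)
C = 5.6            # illustrative confinement constant, eV * nm^2 (assumed)

def emission_wavelength_nm(diameter_nm):
    """Approximate emission wavelength of a dot of the given diameter."""
    gap_ev = E_BULK_EV + C / diameter_nm ** 2
    return 1240 / gap_ev   # convert photon energy (eV) to wavelength (nm)

for d in (2.0, 4.0, 8.0):
    print(f"{d:.0f} nm dot -> ~{emission_wavelength_nm(d):.0f} nm emission")
```

Even with made-up constants, the trend matches the paragraph above: the 2 nm dot lands near the violet end, the 8 nm dot near red.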

What do these dots have to do with the picture on your screen? Today, quantum dots are used to increase the efficiency of LCDs twofold. Introducing dots into LCD and OLED inspired two new enhanced panels: the PE-QD TV and the EE-QLED (or QD-LED).

Working of Photo Emissive QD TV
A cross section of Photo Emissive QD TV technology using backlight. [Source- IEEE]

The major difference between the two is the source of light. In a photo-emissive QD display, a blue LED backlight is used with red and green quantum dots. The blue sub-pixels are left transparent, while the red and green ones convert blue photons into their respective colors. Since the light is already pure, it requires no filtering, raising the efficiency of the display to 99%. And since quantum dots are produced in a solution, they can be cheaply and precisely printed using inkjet and transfer printing. Finally, manufacturing tough flexible electronics can be cost-effective, energy-saving, and streamlined. (read more about it here)

Quantum dots and new QLED may pave the way for flexible electronics in near future. [Source- CNRS]

Photo-emissive QD TV is just an interim step for the display industry. The real quantum leap is the electro-emissive QLED, or QD-LED TV. Here, the quantum dots are excited by electrons rather than by photons from a backlight. These dots can produce the exact shade of color with absolutely no loss of light. Since there is no backlight, there’s no light leakage, so it can simulate Vantablack darkness. These QLED screens use less energy, cost less, and are brighter than OLED, with wide viewing angles too! And since these panels are easy to fabricate and use inorganic materials, QLEDs enjoy longer lifetimes than OLEDs.

Electro Emissive QLED technology
Electro Emissive QLED display cross section with electro-luminescent sub-pixels. [Source- IEEE]

Samsung has been in the race to produce the first-ever commercial QLED TV, but the tech is still in the early development phase, so we can’t expect to see it in the market just yet. But with the global pandemic shutting us in our homes, lying on a comfy sofa, the TV became an integral element that kept us (or just me!) going through this lock-down. Let’s hope they develop the QLED TV at a fast-track pace so that we can enjoy another movie thanks to quantum dots!

What is now Proved was once only Imagined!

William Blake

IoT – The Meandering Road Towards A ‘Smart’ Tomorrow

IoT
‘Beep beep’, the alarm went off. “The next time it goes off, I’ll kill it,” you grumble and hit the dismiss button. Pulling yourself out of the comfort of your warm and cosy bed with great struggle, you get into the already hot shower. Your favourite song, which starts playing just as you turn on the tap, adds to the mood and brings you back to your senses. Just as you switch off the heater and get back to your room to dress up, an overwhelming aroma of freshly brewed coffee hits you like a tsunami. You rush down to the kitchen, have your already prepared toast and coffee, and leave the house just as your car drives itself from the garage to your front door.

Wait, this is not a scene from some sci-fi Netflix show that I am going to recommend to you.
This is, most probably, how the future looks, all thanks to IoT slowly gaining prominence in all domains of life.

What is IoT? How does it make such a future possible?

IoT is short for the Internet of Things. It involves connecting any device to the internet, enabling it to send and receive data through an assigned Internet Protocol (IP) address. It turns any ‘dumb’ device into a ‘smart’ one, allowing it to communicate with other IoT-enabled devices and to act and decide smartly.
A very simple example of an IoT-enabled device would be a ‘smart light bulb’: a bulb which is connected to the internet and can be customized and controlled remotely via a specific mobile application.

A smart bulb which can be customized with a mobile app.
[Credits: LIFX on Vimeo (full video available)]


The variety of IoT-enabled systems ranges from ‘smart homes’ to medical devices that could detect signs of various diseases. It is left to one’s imagination, and the sky is the limit.

How does it work?

The working of IoT is very simple.
Devices have sensors built into them which collect data from their surroundings. The devices share this data via the internet, where it is analyzed. Depending on the analyzed data, a command is issued and communicated to the corresponding device, again via the internet.

IoT architecture
The flowchart of an IoT device.
[Image: IoT Agenda]

Taking the very first example:
The moment you hit the dismiss button, your alarm clock relays the information that you are up. This issues a command to the heater to turn on for you to take a shower.
The shower knob, upon being turned on, sends an instruction to the speaker to play the specific playlist.
Once you turn off the heater after your shower, it relays the data, and the toaster and the coffee maker in the kitchen get a command to set themselves to work.
Walking out of the kitchen, the motion sensor installed in your door decides that you have left, which instructs your garage door to open and your self-driving car (a bold assumption though xD) to drive itself to the door.

Notice how this was all possible only because each device you used relayed a signal informing of your actions and also received instructions on what to do, both using the internet. 
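The relay chain described above can be sketched as a tiny publish/subscribe hub, the pattern real IoT protocols like MQTT are built around. The `Hub` class, device names, and event names here are all hypothetical stand-ins for illustration.

```python
# Minimal sketch of an IoT relay chain: devices publish events to a shared
# hub, and other devices subscribe and react to them.

class Hub:
    def __init__(self):
        self.subscribers = {}   # event name -> list of callbacks
        self.log = []           # every event the hub has seen, in order

    def subscribe(self, event, callback):
        self.subscribers.setdefault(event, []).append(callback)

    def publish(self, event):
        self.log.append(event)
        for callback in self.subscribers.get(event, []):
            callback()

hub = Hub()
# Wire up the morning routine: each device reacts to another's event.
hub.subscribe("alarm_dismissed", lambda: hub.publish("heater_on"))
hub.subscribe("heater_off",      lambda: hub.publish("coffee_maker_on"))

hub.publish("alarm_dismissed")   # you wake up...
hub.publish("heater_off")        # ...and finish your shower
print(hub.log)
```

One dismissed alarm cascades into the heater turning on, and one heater-off event starts the coffee, with no device knowing about any other directly; they only share the hub.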

The future for IoT

Currently, the pace of development of these devices is so fast that we can safely assume such a sci-fi future is not far away. IoT would prove hugely beneficial in all aspects of life and the economy. This technology is being developed right now by many huge companies, for various purposes.

For example, a company by the name of Peak 8 Connected has created a technology called DOODLEBUG. This is basically a small device with sensors which you place under the soil. It monitors the water content in the deep root zone of the soil and provides all sorts of data, enabling you to make intelligent decisions about irrigation.
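The kind of decision such a sensor enables can be sketched as a simple threshold check. Everything here is made up for illustration: the threshold, the readings, and the function name are not from DOODLEBUG’s actual product.

```python
# Hypothetical irrigation decision from root-zone soil-moisture readings.

ROOT_ZONE_THRESHOLD = 0.25   # assumed volumetric water content below which to irrigate

def irrigation_advice(readings):
    """Average recent root-zone moisture readings and recommend an action."""
    average = sum(readings) / len(readings)
    if average < ROOT_ZONE_THRESHOLD:
        return f"irrigate (avg moisture {average:.2f})"
    return f"hold off (avg moisture {average:.2f})"

print(irrigation_advice([0.31, 0.28, 0.30]))   # wet enough
print(irrigation_advice([0.18, 0.22, 0.20]))   # dry root zone: water the field
```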

Doodlebug sensor
irrigation stats

The Doodlebug technology that enables smart decision making thanks to IoT


Another company, Propeller, offers management of asthma-related conditions by just installing a sensor. This sensor, when installed on an inhaler, allows one to assess all the asthma-attack-related data and find out what triggers the attacks. It also keeps you connected with family and friends in case of any emergencies.

propeller sensor
asthma related stats

The Propeller technology that enables one to track asthma-related symptoms and help reduce asthma attacks, along with an improved medication.


These examples show that what IoT has to offer is beyond the limits of our imagination, ranging from a small farmer to a huge MNC.

Why do we need to worry?

With zillions of devices connected and controlled under a shared hub, IoT poses a high-level privacy threat, especially when it comes to ‘smart houses’ and ‘smart offices’. Data security is the chief point of concern, and here IoT plays the victim very well. Data is being collected by IoT-enabled devices every moment, and this data can sometimes be very sensitive.

Consider the very first scenario: data about all of your actions is being relayed constantly. This is very dangerous information if someone manages to get hold of it, and it is not difficult to obtain either. One could easily install some external hardware in one of your IoT-enabled devices that could provide backdoor access to all the data. With unencrypted, unsecured IoT, it could be your company’s sensitive information or your personal information; literally any data could be compromised.

Iot vulnerability, how can iot enabled devices compromise security
[Image: PubNub]

Thus security must be the primary concern when you are handling so many devices sending data in and out every minute. Encryption could be pivotal in providing security, but again, no form of encryption can provide ultimate privacy, and any sort of security is susceptible to breaches and hacks.

For any piece of technology, the primary concern should be how much we rely on it and how much data we share with it. The current generation is so obsessed with technology that privacy has become a laughing stock in our lives. We have put ‘smart’phones in the hands of ‘dumb’ people who readily permit a calculator app to make and manage phone calls.
So IoT appears to offer a dark and dangerous future ahead. But this is not necessarily the case. If security and privacy are taken seriously and measures are taken constructively, then IoT offers the brightest future. But we ought to remember that this future is being offered on a knife’s edge. One wrong step of ignorance, and you will find yourself cut so deep that the wound shall prove fatal, one that shall never heal.

“It is only when they go wrong that machines remind you how powerful they are.”

Clive James

Space Research – Part 1: What is it & How is it helping us

We are living in a world where we humans face a huge list of problems, ranging from diseases to poverty; 8.9% of the world’s population goes to bed on an empty stomach, and the list goes on endlessly. Many want to cut space funding to help fight these humanitarian needs, since huge spending on Space Research or Space Science is the first thing that comes to their mind. But they don’t realize that space research helps us tackle a lot of different problems. In this article we will discuss what space research is and how it is helping us in our daily life.

What is Space Research?

Space research is either carrying out scientific studies in outer space or studying outer space itself. The term includes scientific payloads at any altitude from deep space to low Earth orbit, and extends to sounding rocket research in the upper atmosphere and high-altitude balloons. Some of the major space research organizations are the National Aeronautics and Space Administration (NASA), the Indian Space Research Organisation (ISRO), SpaceX, the Japan Aerospace Exploration Agency (JAXA), and the European Space Agency (ESA).

Many may believe that it’s related to physics and astronomy only. But in reality, space research spans a diverse range of fields, a few of which are:

Space Research - NASA's Hubble Space Telescope in Earth orbit
NASA’s Hubble Space Telescope in Earth orbit (Image: Space.com)

1. Astronomy or Astrophysics

Here they observe and study the origin of the universe and how it works. They also discover and explore other stars and planets to find out whether they could harbor life. Under this field there are a number of research areas, like the Big Bang, Physics of the Cosmos, Dark Matter & Dark Energy, and Black Holes, to name a few. The Hubble Space Telescope has been getting images from the very ends of the Universe, which has helped us gain a better understanding of it.

2. Earth Sciences

In this field, they study data on the science of the movement and composition of our planet’s atmosphere; land cover, land use, and vegetation; ocean tides, temperatures, and upper-ocean life; and ice on land and sea. This is carried out using observational data from satellites and instruments on the International Space Station (ISS). Earth science also helps in tackling natural disasters like cyclones and dust storms.

Astronauts Shane Kimbrough and Sandy Magnus play with floating food during the STS-126 shuttle mission. (Image: NASA)

3. Life Sciences

Here, research is mostly on astrobiology, the origins of life, habitability, and life in extreme environments. Researchers experiment with and observe the effects of spaceflight on the human body and radiation risks, and study other life sciences involving microbes, plants, and animals. A quarterly journal, “Life Sciences in Space Research”, is also published. It is the official journal of the Committee on Space Research (COSPAR), and its editor-in-chief is Dr. Tom Hei (Columbia University Medical Centre).

4. Medicine

In this field, various experiments are undertaken to get a better understanding of different aspects of human health like aging, trauma, and diseases. Many of the biological and human physiological experiments have yielded successful results. A lot of new advances are also being made in medical technology, telemedicine, physiological stress-response systems, and cell behavior, to name a few.

5. Physics

In this field, fundamental physics is usually studied. Some of the experiments that take place are studies of matter, space, and time; gravitational physics; understanding the source and location of signals; critical phenomena; and the Standard Model. They are either observational or laboratory studies. Technological advances have also followed in lasers, Magnetic Resonance Imaging (MRI), computer devices, atomic clocks, and GPS.

Neil Alden Armstrong was the first person to walk on the Moon, on the first crewed mission to land there. (Image: Worcester Polytechnic Institute)

6. Space Exploration

Space exploration is the use of astronomy and space technology to explore outer space. It is mainly carried out by astronomers who observe, while its physical exploration is done by unmanned robotic space probes and human spaceflight. Space exploration is the main source of data for space science. Many experiments and missions fall under this field, such as Starship, which is being developed by SpaceX, and a lot of upcoming crewed spaceflights by NASA, ISRO, ESA, and JAXA, to name a few.

How is it helping us?

Well, space research is helping us in a number of ways in our daily lives. To name a few:

1. Disaster Management

We tend to face a lot of natural disasters like floods, landslides, cyclones, forest fires, droughts, etc. Satellites provide us with synoptic observations of a natural disaster at regular intervals, which helps us in better planning and management. Satellite communication and navigation systems also play a major role by offering improved technological options. This not only helps save people but also helps us prepare for such disasters beforehand. One example of space technology being used in disaster management:

Hurricane Florence, a Category 4 hurricane that hit the USA in 2018. (Image: NASA)

Cyclone Fani (2019)

Cyclone Fani was an extremely severe cyclonic storm, equivalent to a Category 4 hurricane, which originated in the southern Indian Ocean back in 2019 and caused damages of more than $8.1 billion. Five ISRO satellites (INSAT-3D, INSAT-3DR, SCATSAT-1, Oceansat-2, and Megha-Tropiques) responded fast, keeping a close eye on the cyclone and studying its intensity, location, and the cloud cover around it. This not only reduced the damage caused but also saved the lives of more than 1.2 million people in the affected states of Odisha, Andhra Pradesh, and West Bengal.

2. Clean Energy

Sending up a satellite with a huge fuel tank isn’t feasible, so researchers have to think outside the box. Making sure that satellites have the ability to actually do their job is one of the most critical aspects of sending them into space. The improvement of solar cell technology has not only helped satellites harness power from the Sun but has also helped bring down the cost and improve the availability of solar power for everyone.

Space research has also contributed to battery technology—a crucial component of a viable solar energy system. It has also led to developments in technology where solar is used as a renewable fuel to break water into oxygen and hydrogen. This technology is already being used on Earth to curb pollution.

3. Water Purification

Water is one of the most precious and vital resources for humans. But we cannot carry large amounts of water into space, as it would be very expensive to do so. So, for the astronauts to stay hydrated, researchers needed a way to purify and recycle water in a very effective way.

The water purification technology developed on the Apollo mission kills bacteria, viruses and algae, making water safe to consume. These water purification systems are now helping people in poorer countries that don’t have access to clean and clear water. The Water Security Corporation, in collaboration with other organizations, has deployed systems using NASA water-processing technology around the world.

4. Manufacturing of New Materials and Technology

At this time there are only a few materials that can be made solely in the microgravity environment of space and have sufficient value back on Earth to justify their manufacture even at today’s high launch costs. The hallmark example is ZBLAN, a fiber-optic material that may offer much lower signal losses per length of fiber than anything that can be made on Earth.

Other on-orbit manufacturing projects underway on the ISS include bio-printing, industrial crystallization, super alloy casting, growing human stem cells, and ceramic stereolithography to name a few. To know more products check out Made in Space.

COVID-19 Vaccines: Sighting the End of Scientific Uncertainty


With the end of the year 2020 a few weeks away, the thought amongst people all over the world is whether the year 2021 will be free of COVID-19 (to see key vaccine stats, click here). In the initial months of 2020, people thought of the coronavirus as a localized affair in China, but later on it proved to be a worldwide pandemic. As this disease continues to create a series of downfalls in various aspects of human life in all parts of the world, vaccine development and administration have become among the most awaited events.

After a remarkably accelerated development period, two experimental COVID-19 (coronavirus) vaccines are almost set for primetime. Both deploy a technology called messenger RNA (mRNA), which has been investigated and analyzed for decades in different forms but has never been used in a commercial vaccine. Pfizer and Moderna, the companies behind the two most promising mRNA vaccines, are building on years of research and already have records of building complex, cutting-edge therapies for other health conditions.

Countries have been affected by multiple waves of coronavirus.
The invention of vaccines has created a positive sentiment.

The plot of the birth of the first vaccine

It was in the 1760s, when smallpox killed nearly 400,000 Europeans every year, that a strange rumour spread: people working in dairies never seemed to contract smallpox. This reached a surgeon’s assistant named Edward Jenner, who noticed that dairy workers had already been exposed to a similar disease-causing virus called cowpox. Jenner began to inoculate children with the cowpox virus, and ultimately the world’s first vaccine was born.

The postulate behind Jenner’s method, using a weakened version of a virus to create resistance to the deadlier version, is still used in some vaccines. But today there are many other methods for vaccine production, and these are being used to create new Covid-19 vaccines in record time.

So what is mRNA? Can this experimental technology actually be the secret weapon that saves the world from the coronavirus?

mRNA in Vaccines and Body Cells

mRNA does what its name indicates: it passes genetic information to parts of our body as a messenger, without overwriting or erasing the genetic information that’s already there, and our body already deploys mechanisms that make and use mRNA. It is one of three types of ribonucleic acid (RNA) that all work together to render the genetic information in deoxyribonucleic acid (DNA) into proteins in our bodies.

Image depicting the mRNA translation that happens after administering the vaccine.
Image describing the mRNA translation.
Credit: ASBMB

Consider the following analogy to understand the mechanics involved clearly:

Assume DNA is a read-only file on a computer, and you insert a CD-ROM to install MS Word 2000 on a Windows XP PC in 2001. When the disk runs, the required files are copied onto the computer and the originals are untouched. Similarly, when a Google Drive file is shared, we copy it before updating it. This is what RNA does: it “transcribes” DNA’s information into the initial steps of expressing those genes in our bodies.
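The “copy, don’t modify” idea can be sketched in code: transcribing a DNA template strand into mRNA builds a new string and leaves the original untouched. The sequence below is a made-up example, not the coronavirus spike gene.

```python
# Toy sketch of DNA -> mRNA transcription: each DNA base pairs with its
# RNA complement (T pairs with A, A with U, G with C, C with G).

DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(dna_template):
    """Build the mRNA complement of a DNA template strand."""
    return "".join(DNA_TO_MRNA[base] for base in dna_template)

template = "TACGGATTC"        # hypothetical template strand
mrna = transcribe(template)
print(mrna)                   # a fresh mRNA copy...
print(template)               # ...while the DNA "read-only file" is unchanged
```

An mRNA vaccine hands the cell a string like `mrna` directly, so the cell’s protein factories can read it without the DNA ever being touched.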

Vaccines developed by both Pfizer and Moderna are based on mRNA and are nearing the end of clinical trials, while some countries have already begun administering them to front-line medical personnel. Although vaccines with other mechanisms are also a considerable option, mRNA-based ones happen to be the fastest at this point in time. So far no other mRNA vaccine has made it to market, and these two are the first to be approved, at least in a few places.

Image of Pfizer and BioNTech's vaccine shot and syringe.
Image featuring Vaccine by Pfizer which has been recently approved in the US.
Credit: Financial Times

BioNTech (Pfizer’s technical partner) and Moderna both designed a tiny snippet of genetic code that is introduced into cells to stimulate a coronavirus immune response. The two vaccines have different chemical compositions, methods of synthesis, and modes of mRNA delivery. Both are to be administered twice, a few weeks apart.

The immune response is genetically encoded in some complex ways, which implies that although DNA doesn’t carry a read-only installer for a coronavirus response, synthetic mRNA can coax the body’s mechanisms into manufacturing that response anyway. This genetic complexity rests on the randomness found in nature and in biochemical DNA. To read about random DNA synthesis paving the way for future biochemical methods, click here.

Two sorts of vaccines are to be administered in duration of two weeks
Credit: TheWorld

How scientists from around the world explain this clever mechanism

“We humans don’t need to intellectually work out how to make viruses; our bodies are already very, very good at incubating them. When the coronavirus infects us, it hijacks our cellular machinery, turning our cells into miniature factories that churn out infectious viruses. The mRNA vaccine makes this vulnerability into a strength. What if we can trick our own cells into making just one individually harmless, though very recognizable, viral protein? The coronavirus’s spike protein fits this description, and the instructions for making it can be encoded into genetic material called mRNA,” as explained by The Atlantic’s Sarah Zhang.

Admittedly, the vaccine alone cannot slow the dangerous trajectory of coronavirus-related fatalities and health risks. But it clears the view towards days when the pandemic will be history.

Masks will play a key role in reaching those harmonious days. See how science has evolved with coronavirus masks that can kill pathogens on their surface.

Every time we prevent the spread of infection by masking and maintaining social distance, we add to the possibility of preventing COVID-19 forever through vaccines.

Quantum Tech Breakthrough: ‘Bright’ Qubits synthesized for the first time

Japan is famous for its technology, or rather I would say its fast, superfast, and nowadays quantum technology. Whether it is transportation or computing, Japan has always led the way over other countries around the globe. With its mind-blowing speed of 415.5 petaflops, able to do trillions of calculations in the blink of an eye, and a hefty price tag, Japan’s Fugaku is the world’s fastest supercomputer. But the thing set to power the next generation of such incredible machines is the quantum bit.

Before discussing the newly synthesized chemical molecules, which could be a potential breakthrough for qubits, let’s briefly look at what qubits actually are in quantum tech.

Qubits – The rise of quantum tech

For simplicity, we can say that qubits carry an unimaginable amount of information in almost no space. A qubit is the basic unit of quantum information, the quantum version of the classical binary ‘bit’, physically realized with a two-state device. In classical computing, everything is based on the numbers ‘0’ and ‘1’; while processing information, a bit is set to one of those two levels. A qubit, on the other hand, can be in a coherent superposition of both. This way it can hold more information (as infinitely many superpositions are possible).

Qubits – The rise of quantum tech
(image: The Austin Chronicle)
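The superposition idea above can be sketched as two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1. This is a pure-Python illustration; the function name is my own, and real quantum software would use a framework like Qiskit.

```python
# Minimal sketch of a qubit state alpha|0> + beta|1>: the squared magnitudes
# of the two amplitudes are the probabilities of measuring 0 or 1.

import math

def probabilities(alpha, beta):
    """Measurement probabilities for the state alpha|0> + beta|1>."""
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0), "state must be normalised"
    return p0, p1

# A classical bit is just one of the two basis states:
print(probabilities(1.0, 0.0))     # definitely measures 0
# An equal superposition holds both possibilities at once:
amp = 1 / math.sqrt(2)
print(probabilities(amp, amp))     # 50/50 between 0 and 1
```

The classical bit is stuck at one of the first two cases; the continuum of allowed (alpha, beta) pairs in between is exactly the “infinitely many superpositions” the paragraph mentions.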

Generally, qubits are made of the same semiconducting materials as our usual electronics. But now, for the first time, chemists and physicists at Northwestern University and the University of Chicago have developed a new method to create tailor-made qubits: chemically synthesizing molecules that encode quantum information in their magnetic, or ‘spin’, states.

Molecular Qubits that respond to light

Spins in solid-state systems, such as quantum dots and defect centres in diamond, can easily be controlled with light and used in quantum information processing. The more challenging part is tuning their properties and making large arrays, and this can probably be done more easily with spins in molecules.

Central chromium atom surrounded by organic ligands – explaining spin and magnetic properties (image: Google Patents)

For instance, consider molecules consisting of a central chromium atom surrounded by organic ligands (ligands are simply other molecules attached to the central atom). The spin and optical properties of such a complex can be tailored simply by changing the positions of the methyl groups on the ligands.

This shows that molecules with tunable properties, especially spin and magnetic ones, are promising building blocks for quantum tech. At the same time they can be easily assembled into scalable arrays and readily fitted into different devices. In typical molecular systems, being able to address the ground-state spins would enable a wide range of applications in quantum tech and information technology. However, this crucial functionality is difficult to find and had not yet been achieved.

Researchers have now found a bottom-up approach to develop molecules whose spin states can be used as qubits and can be directly interfaced with the outside world.

Exciting the molecules…Varying the atoms…Producing the ‘Bright Qubits’

In this bottom-up approach, the chemists and physicists used organometallic chromium molecules to create a spin state that they could control with light and microwaves. They demonstrated such optical addressability in a series of synthesized organometallic chromium(IV) molecules.

In a new step toward quantum tech, scientists synthesize 'bright' quantum bits (image: Northwestern)

By exciting the molecules with precisely controlled laser pulses and measuring the emitted light, the scientists could actually 'read' the spin states of the molecules after placing them in a superposition. This readout is a key requirement for using them in quantum tech. In addition, by atomistic modification of the molecular structure, they could vary the spin and optical properties of these compounds. As indicated earlier, this is exactly the functionality that had been difficult to find, and it was achieved here.

This highlighted the promise for tailor-made molecular qubits and a new step towards designer quantum systems synthesized from bottom-up approach.

The Achievement for Quantum Tech

“Over the last few decades, optically addressable spins in semiconductors have been shown to be extremely powerful for applications including quantum-enhanced sensing. Translating the physics of these systems into a molecular architecture opens a powerful toolbox of synthetic chemistry to enable novel functionality that we are only just beginning to explore,” said one of the researchers.

This research has potentially opened up a new era of synthetic chemistry. The bottom-up approach offers both the functionalization of individual units as 'designer qubits' for target applications and the creation of arrays of readily controllable quantum states, opening the possibility of scalable quantum systems.

Molecular Qubits for quantum computing and scalable electronics (image: Microsoft Quantum)

One major application of such tunable 'bright' qubits could be quantum sensors. These could be designed to target specific molecules, find specific cells within the body, detect when food spoils, or even spot dangerous chemicals.

Thus, this new step towards quantum tech could ultimately lead to quantum states with extraordinary flexibility and control, paving the way for next-generation quantum technology. Importantly, this bottom-up approach could also help integrate quantum tech with existing classical technologies: the techniques of molecular design can be harnessed to create new atomic-scale systems for quantum information science. Bringing these two sciences together will broaden interest in the field and has the potential to enhance both quantum sensing and computation.

Interestingly, using molecular systems in LEDs was once a major transformation in the era of modern electronics, and something similar can be expected from the molecular qubits synthesized here.

This has laid the basis for a wickedly strong future of quantum computing with its minuteness, and created a paradigm for quantum information science.

OSD

To read about how quantum computing and quantum information technology have evolved in modern times, click here.

Superconductivity Observed At An Astounding 14° In A Strange Material


Normalcy is considered under-rated. In a normal world, things that deviate from this line lead to inventions, breakthroughs, discoveries, and innovations. ‘Deviations create milestones.’ But in the CMP world, normalcy is the definition of a revolution. For decades physicists have been trying to achieve an abnormal phenomenon called “room-temperature superconductivity”, and now researchers at the University of Rochester have managed to achieve this feat by creating an exotic strange material (how cool! 😎).

Expulsion of magnetic field lines (the Meissner effect), a special trait of superconducting materials [image: University of Rochester photo / J. Adam Fenster]

“This is the first time we can claim that we have achieved room-temperature superconductivity,” said Ion Errea, a condensed matter theorist at the University of the Basque Country in Spain. “It’s like an undiscovered treasure trove with limitless applications.” In India alone, 19.33% of electricity is lost every year in transmission, with the world average hovering between 5-6%. Billions of dollars vanish as zillions of electrons convert themselves into heat, and ultimately ashes. But room-temperature superconductivity will cool these zapping electrons till their spines chill and spark. Lossless power lines, frictionless high-speed trains, and the list is huge if not endless.

Room-temperature superconductivity has a wide range of applications, like frictionless high-speed trains. [Image: YouTube]

The remarkable material, which Ranga Dias likes to call “a condensed cocktail”, exhibits superconducting behaviour at a whopping 14° C. Not exactly room temperature, but a chilly room perhaps. The hydrogen, carbon, and sulfur compound has broken the previous record by a gap of nearly 40 degrees.

The Strange Superconducting Material

Normal conductors have a resistance that opposes the flow of electrons. But in 1911, H. K. Onnes was the first to discover the phenomenon of superconductivity, in a mercury wire chilled to 4.2 K. A simple explanation of this spectacle was provided in 1957 by the BCS theory. Cooper first proposed that electrons condense into pairs, which diminishes the resistance of the material. These pairs are the result of the electrons’ interaction with lattice vibrations in the form of phonons.

The BCS theory of superconductivity (below 30 K) was established by Bardeen, Cooper and Schrieffer (left to right). [Image: Brown University]

The underlying principle of phonon interaction is what won them the Nobel Prize in 1972 for establishing the theory of superconductivity. The elementary explanation is as follows: an electron moving through a lattice attracts positive charges towards it, which in turn attract another electron, but one of opposite spin. These two electrons become correlated (not entangled! there’s a difference). A multi-electron system will have multiple pairs, and each pair requires a little energy to break the correlation. Lattice vibrations can provide this energy, but breaking one pair would change the entire state of the condensate. The energy to break one pair is therefore tied to all of them, and at low temperatures there isn’t enough energy in the lattice vibrations to provide this kick. This way the pairs conduct current like cars moving in orderly fashion on an express highway, with minimum chance of collision.

The BCS theory of superconductivity explained: a passing electron attracts positive charges towards it, in turn attracting another electron. [Image: makeagif]

Following the proposition of this theory, the quest for the ultimate superconducting material began: a material that could withstand everyday heat without interrupting the electrons’ delicate dance. So in 1968, Neil Ashcroft, a solid-state physicist at Cornell University, put forward an outstanding notion: use the lightest element of all, tricky hydrogen. But before you object that hydrogen gas behaves as literally the opposite of a superconductor, there’s a twist in the tale: the gas is squashed under tremendous pressure and transforms into a metallic lattice. Ashcroft later argued that the hydrides of certain elements might push that thermometer bar straight up, offering the wonders of superconductivity at far more modest pressures.

So, with high hopes and low temperature scales, the hunt commenced. And voilà: in 2015 a lab in Germany made a metallic hydrogen sulfide that was superconducting at 203 K under 1.5 million times atmospheric pressure. The same lab in 2019 synthesized a different compound, a lanthanum-based hydride, that exhibited superconductivity at 250 K, but under 1.8 million times atmospheric pressure (super-cool, isn’t it?! 👌). Fueled by passion and guided by these discoveries, Dias’ team managed to break almost every record in the history of superconductivity. Their strange material showed the signs of a possible new superconductor.

UNLV physicist Ashkan Salamat (pictured), along with colleague Ranga Dias, assistant professor of physics and mechanical engineering at the University of Rochester, established room-temperature superconductivity in a diamond anvil cell. [Image: Josh Hawkins/UNLV Photo Services]

The Race For Supremacy In Superconductivity

The lab at the University of Rochester tested several hydrogen-sulfur compounds once it was confirmed that H3S is superconducting. Add too little hydrogen and the material won’t behave as a superconductor; add too much and the material will only metalize at pressures that can crack any diamond anvil. It’s just like adding salt to your food. Along the way, the team smashed dozens of $3,000 anvils, consuming 80% of their budget. 😮

The winning blueprint proved to be a riff on the 2015 formula. The scholars started with hydrogen and hydrogen sulfide gas and added a tint of methane, giving the result a name: ‘carbonaceous sulfur hydride‘. The lightness of hydrogen improved the lattice vibrations that steer Cooper pairs, creating a superconducting environment. But strong neighbouring bonds also help the system maintain this robust state, and the carbon in methane did just that. The H2 + H2S system formed the precursor superconducting material H3S. The whole arrangement was subjected to 4.0 GPa and 532 nm laser light that drove the photoscission of the S–S bond. The laser treatment created a strange material that began to crystallize under 147 GPa of pressure. At nearly 220 GPa they observed a sharp spike in TC, and at 267 ± 10 GPa a TC of 287.7 ± 1.2 K was recorded, surpassing all previous records. The catch: they accomplished this trick at a pressure more than 2 million times what humans normally bear, equivalent to roughly 75% of the pressure at Earth’s core.
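Those pressure comparisons can be sanity-checked with a quick back-of-envelope conversion (taking 1 atm = 101,325 Pa, and roughly 360 GPa as an approximate textbook value for the pressure at Earth's core):

```python
# Quick sanity check on the quoted pressures (values approximate).
ATM_PA = 101_325          # 1 standard atmosphere in pascals
P_SAMPLE_GPA = 267        # pressure at which Tc ~ 288 K was recorded
P_CORE_GPA = 360          # approximate pressure at Earth's inner core

atmospheres = P_SAMPLE_GPA * 1e9 / ATM_PA
core_fraction = P_SAMPLE_GPA / P_CORE_GPA
print(f"{atmospheres:.2e} atm, {core_fraction:.0%} of core pressure")
# on the order of 2.6 million atmospheres, ~74% of core pressure
```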

How a diamond anvil cell works. [Image: rochester.edu]
Superconducting transition at 272 GPa, showing the transition at ~280 K; TC was determined from the onset of superconductivity (see arrow). [Image: researchgate.net/Nature]

“People talked about room-temperature superconductivity forever,” said Pickard. He added that “people might not have appreciated that when we did it, we were going to do it at such high pressures.” A huge twist came when the crystals turned out to possess a structure very different from the one predicted: the team knew the material worked, but why, they had no clear idea. Hydrogen is the smallest yet most deceiving element, and it declined to show its face in traditional probes of the lattice structure. And what can’t be seen can’t be mapped. Frankly speaking, they don’t know exactly what they have on their hands, at least not the chemical formula. As the saying goes, “what comes easily goes twice as fast, but the hard (mysterious) part always lasts.”

The setup used by the University of Rochester to achieve the long-sought goal: a diamond anvil cell and laser light to synthesize the material. [Image: rochester.edu]

Once they figure the twist out, the material may twist the world with its sheer simplicity. They can alter, add to, and modify the formula, tailoring it to the application at hand. But, as they say, it’s still a long way to go. Room-temperature superconductivity has been surmounted; now the aim is ambient atmospheric pressure. As challenging as it seems, that discovery would be paramount in the condensed matter physics world.

One of the advantages of encountering an Unknown Strange Rarity is that one can always anticipate surprising discoveries, giving us a peek into the unexplored and the outlandish.

R.D.X

Source- Room temperature superconductivity in carbonaceous sulfur hydride/researchgate.net

A Biochemical Random Number: DNA synthesis in absolute new way


Randomness, whether found in nature or created by humans, plays a key role in the functioning of the entire biochemical ecosystem. True random numbers are in vast demand in diverse fields: gambling, slot machines, and, in today’s modern era of AI and robotics, data encryption. Looking at the present and future of the human generation, the volume of securely encrypted data transmission required by today’s complex network of people, transactions, and interactions increases continuously.

To ensure the total security of these data, large volumes of true random numbers are required. These numbers should be genuinely random, such that even people who know the method used to generate them cannot predict them.

Scientists generate a huge true biological random number using DNA synthesis (image: Knowledge Area 51)

So, why is all this relevant to the biochemical ecosystem? How is producing random numbers helping scientists?

Before we get answers to these curious questions, let’s take a leap into the area of randomness.

Coming Back To Where It All Started

When Francis Galton demonstrated one of the simplest methods of generating random numbers in 1890, the era of Random Number Generators (RNGs) began. He demonstrated rolling dice, noting that their positions at the outset afford no perceptible clue to what they will be after even a single good shake and toss. The technological advancements that followed made it ever more necessary to generate large quantities of random numbers for society’s needs. This led to a series of breakthroughs, including the first integration of a hardware random number generator (RNG) into a real computer.

Random Number Generators (RNGs) – an important key in biochemical research (image: netclipart)

The modern world then demanded network security services, which introduced encryption and decryption schemes that consume high-quality random numbers. Among today’s state-of-the-art RNGs, the Intel RNG provides 500 MB/s of throughput. Such hardware RNGs create bit streams from highly unpredictable physical processes, making them useful for secure data transmission since they are less prone to cryptanalytic attacks.

Now consider RNGs in chemistry. Chemical reactions are statistical processes, and the formation of reaction products follows a certain probability distribution. Although the expected products can be statistically predicted, identifying individual molecules after synthesis is rarely possible. This inability to identify individual molecules means the randomness is lost when analyzing such reactions, which is why chemical reactions typically can’t be used as RNGs.

DNA synthesis, however, is different. The synthetic production of DNA is a highly unpredictable chemical process: it has random probability distributions and patterns that can be analyzed statistically but cannot be predicted precisely. Yet it has the advantage that the individual molecules in the synthesized DNA can easily be identified, and at the same time analyzed, by next-generation sequencing (NGS) technologies.

DNA synthesis helping scientists generate a true biochemical random number (image: LABIOTECH.eu)

The Need for Artificial DNA Synthesis in the Biochemical Ecosystem & for Generating Random Numbers

You must be wondering why exactly there is a need for artificial DNA synthesis.

Well, biology as a whole is a vivid and complex science, and artificial DNA synthesis even more so.

Before we jump to synthesis, let’s recall what replication is. It is the essential process in which a cell divides and the two daughter cells must contain the same genetic information. In nature this happens on its own, as it does in our bodies while we grow. Artificial DNA synthesis is of great value in such gene-specific studies, and the field has tremendous applications across disciplines: DNA sequencing and identification, species identification, disease diagnosis, and many more.

Randomness in nature

Sequencing technologies to identify individual strands of DNA have been around since the late 1970s. In recent times, novel sequencing methods offer remarkable throughput and have made it possible to read individual molecules. Thus, DNA can potentially be used as a source of random numbers. Many earlier studies considered using DNA for RNGs, but they remained largely theoretical. Now, for the first time, a non-physical method for generating such numbers has been described.

This one uses biochemical signals and actually works in practice.

DNA Synthesis using Random Biochemical Building Blocks

For this newest approach, the ETH scientists applied the synthesis of DNA molecules, an established chemical research method in frequent use for many years. It is generally used to synthesize a precisely defined DNA sequence.

In the present case, the research team built DNA molecules with 64 biochemical building-block positions, with one of the four DNA bases (A, C, G & T) located at random at each position. This was achieved by using a mixture of the four building blocks, instead of just one, at every step of the synthesis.

Interestingly, this method produced a combination of approximately three quadrillion individual molecules with a relatively simple synthesis. Afterwards, the researchers used an effective method to determine the DNA sequence of five million of these molecules, which resulted in 12 megabytes of data, stored as zeros and ones on a computer.
The four DNA bases A, C, G & T (image: National Human Genome Research Institute)
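Since each position holds one of four bases, every base can carry two bits of information, so a 64-position molecule yields 128 raw bits. A minimal sketch of turning sequenced reads into a bit stream; the 2-bit mapping used here (A=00, C=01, G=10, T=11) is a common illustrative choice, not necessarily the encoding the ETH team used:

```python
# Hypothetical 2-bit encoding of the four bases (the actual mapping
# used in the study may differ); a 64-mer read yields 128 raw bits.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def read_to_bits(read: str) -> str:
    """Convert one sequenced DNA read into a string of raw bits."""
    return "".join(BASE_TO_BITS[b] for b in read)

bits = read_to_bits("GATTACA")
print(bits)  # 10001111000100
```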

DNA Random Synthesis

This figure shows the random DNA synthesis. It also shows how bias can arise in the procedure and how to overcome it.

Basically, the DNA building blocks are mixed and then flowed over the binding substrate, where they start forming a strand of DNA according to their coupling/binding efficiencies.

The rate at which an individual nucleotide i couples is rᵢ = kᵢ × cᵢ, where kᵢ is its rate constant and cᵢ its concentration.
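Dividing each nucleotide's coupling rate by the sum over all four gives the probability of that base being incorporated at a given position. A minimal sketch, with made-up rate constants and concentrations purely for illustration:

```python
# Illustrative only: these rate constants k_i and concentrations c_i
# are made-up numbers, not measured values from the study.
k = {"A": 1.00, "C": 1.00, "G": 1.15, "T": 1.10}   # rate constants
c = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}   # concentrations

r = {b: k[b] * c[b] for b in k}                    # r_i = k_i * c_i
total = sum(r.values())
p = {b: r[b] / total for b in r}                   # incorporation odds
print(p)  # with these numbers, G and T come out slightly more likely
```

Unequal rates like these are one way a bias toward particular bases (as the team later observed for G and T) can creep in.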

There is also a chance of individual nucleotides binding to other nucleotides during the process. This is prevented by using protecting groups, which ensure that only one new nucleotide binds per DNA strand per iteration. Excess nucleotides which have not found a DNA strand to bind to are then removed from the synthesis chamber, and the DNA strands are deprotected.

To elongate each DNA strand to the desired length, this cycle of adding a mixture of nucleotides, washing off the left-overs, and deprotecting is repeated as many times as required. Once the desired length has been reached, the DNA is cleaved from the synthesis support.

Small Space + Biochemical random number = Huge Quantity of Randomness

Although the researchers opted for a bias-free approach, analysis showed that the distribution of the four biochemical building blocks A, C, G, T was not entirely even. Either the intricacies of nature or the synthesis method deployed led to the bases G and T being incorporated more frequently than the other two. But the ETH scientists were able to solve this issue with another simple algorithm, in turn generating perfect, true random numbers.
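The article doesn't specify which debiasing algorithm the team used, but a classic example of how bias can be removed from a stream of independent bits is the von Neumann extractor, sketched here purely as an illustration of the idea:

```python
def von_neumann_extract(bits: str) -> str:
    """Classic debiasing: read bits in pairs, emit '0' for '01' and
    '1' for '10', and discard '00'/'11'. The output is unbiased if
    the input bits are independent, even when 0s and 1s are not
    equally likely (at the cost of discarding some bits)."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return "".join(out)

print(von_neumann_extract("11010010"))  # "01"
```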

The main aim of the team was to show that random occurrences in chemical reactions can be exploited to generate perfect random numbers. As they report: by synthesizing 204 µg of DNA, they showed the possibility of generating random numbers at a rate higher than 225 GB/s, offering volumes of up to 7 million GB of randomness at a cost of 0.000014 USD/GB (synthesis), with a fully scalable read-out on demand using Illumina sequencing technology.
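Taking those quoted figures at face value, a quick back-of-envelope calculation shows the scale involved:

```python
# Back-of-envelope using the figures quoted above.
volume_gb = 7e6          # up to 7 million GB of randomness
cost_per_gb = 0.000014   # USD/GB for synthesis
rate_gbps = 225          # GB/s generation rate

total_cost = volume_gb * cost_per_gb          # synthesis cost, USD
seconds_to_generate = volume_gb / rate_gbps   # time at quoted rate
print(f"${total_cost:.0f}, {seconds_to_generate / 3600:.1f} h")
# roughly $98 of synthesis, generated in under nine hours
```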

Thus, in this work the researchers took advantage of the stochastic (unpredictable) properties of chemical reactions to generate true random numbers from DNA synthesis, offering a viable alternative source for large volumes of randomness. While efforts to reduce the cost of reading and writing DNA are ongoing, using DNA as a commercial biochemical RNG could already be of interest today and in the future.

The study of randomness using biochemical methods truly interlinks all three areas of science, and in turn links humans to nature.

OSD

Apart from DNA, proteins are essential biochemical components too. For more on how proteins were synthesized like never before in 2019, click here.

Quantum Reality: Understanding the Microscopic World


We have all heard the word ‘Quantum’ at some point or other in our lives. If you haven’t, then you are from some other universe. We have all heard about it despite the fact that most of us don’t know anything about it. Some of us might know ‘what’ it does, but nobody knows ‘how’ it does whatever it does. The great physicist and Nobel laureate Richard Feynman once said, “I think I can safely say that nobody really understands quantum mechanics.” So let us try to understand why such a great physicist said such a thing.

What is Quantum?

First of all, let’s try to understand what quantum physics is all about. You might have heard of a “quantum battery” which is supposed to run forever without any loss of energy, or “quantum vehicles” which run faster than the speed of light. Believe me, that’s not what quantum means; that doesn’t even qualify to be called quantum at all. Quantum physics, in a sense, is the study of microscopic objects. As we go to bigger and bigger scales, we tend to call objects less and less quantum, i.e. classical; that’s why classical physics is the study of macroscopic objects.

How did it all start?

Quantum physics is our best description of how nature works; it describes anything you might come across while performing an experiment. So let’s take a look at how it all started.

John Dalton proposed the theory of atoms, i.e. that everything in the universe is made up of small indivisible particles called “atoms”. Later it was found that they are not indivisible but are composed of protons, electrons, neutrons, etc. And the picture of the atom we had was something like this:

(Image: ThoughtCo)

The same picture is in our thehavok logo. It described the motion of electrons around the nucleus in just the same way planets move around the sun. But this model had many drawbacks and was discarded and upgraded many times over history. The origin of quantum mechanics can be traced along two paths:

  1. Einstein stating that light shows both particle and wave behaviour.
  2. De Broglie and Schrödinger stating that everything in the world shows both particle and wave behaviour: it behaves as a wave when you don't look at it, and as a particle when you look at it.

A few years later, Erwin Schrödinger came up with an equation that tells you how it all works: iħ ∂𝛹/∂t = Ĥ𝛹.

He said that we have something called a wavefunction (𝛹) which is spread out everywhere, but when we look at it, it gets localized and gives an outcome of the experiment; the above equation tells you how it evolves with time. This was the beginning of the quantum era.

What does it all mean?

There have been long debates between philosophers and physicists about all this. But most of them stem from the fact that we are still stuck with the language of “cause and effect” (you can read about it in my previous article “Physics and it’s not so popular language“), while the laws of physics do not speak that language; they speak the language of mathematics.

Even though mathematics was created by the human mind, it is remarkably effective at explaining the world. Yet if you ask a physicist what all these equations in quantum mechanics textbooks mean, he will simply tell you that he doesn’t know. What he can certainly tell you is, if you perform a certain experiment, the probability of getting a given outcome.

Does this mean that the world is probabilistic, i.e. that we observe one of many possible outcomes? Well, we simply don’t know. There have been many interpretations of quantum mechanics. The two we will talk about here are:

  1. Copenhagen interpretation
  2. Many-worlds interpretation.

Copenhagen interpretation

Suppose you want to study the physics of an electron and its position in a certain situation. To describe the electron, we have something called the wavefunction (𝛹), which is spread out everywhere.

The value of |𝛹|² at each point gives you the probability of finding the electron at that point. When you measure the position of the electron, the wavefunction collapses to a single point, and you come to know where the electron is.

(Image: Medium)
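A small numerical sketch of this (the Gaussian wavefunction below is purely illustrative): squaring the wavefunction's magnitude gives a probability distribution over positions, and drawing one sample from it plays the role of the collapse:

```python
import numpy as np

# A wavefunction discretized on a 1-D grid (a Gaussian wavepacket,
# chosen purely for illustration).
x = np.linspace(-5, 5, 1001)
psi = np.exp(-x**2 / 2).astype(complex)

prob = np.abs(psi) ** 2        # Born rule: |psi|^2 at each grid point
prob /= prob.sum()             # normalize into a discrete distribution

# "Measurement": sampling one position from |psi|^2 models the
# collapse -- afterwards the electron is found at a single point.
rng = np.random.default_rng(0)
measured_x = rng.choice(x, p=prob)
print(measured_x)
```

Repeating the measurement on identically prepared wavefunctions scatters the outcomes according to |𝛹|², which is all the formalism predicts.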

But this interpretation has a “measurement problem”: what do you mean by measurement? How does the collapse happen? How quickly does it happen? The answers to all these questions are unknown.

These questions motivate alternative interpretations, like the many-worlds interpretation.

Many-worlds interpretation

There are three points to get started with this interpretation.

  1. The state of a quantum system is a superposition of all possible states (this is what “the wavefunction is spread everywhere” meant in the Copenhagen interpretation).
  2. You (as an observer) are a part of the quantum mechanical system.
  3. You (the observer) are entangled with the system, i.e. we don’t have different wavefunctions for different things but a single wavefunction of the whole world.

Now you put the wavefunction into the equation and let it evolve. When you make a measurement, you see one of all the possible results.
E.g., you toss a coin; there are two things which can happen.

  1. Coin reads “head” and you read “head”.
  2. Coin reads “tail” and you read “tail”.

There can’t be a third possibility until and unless you are drunk.
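In state-vector terms, the coin-plus-you system after the toss can be sketched as a vector over the four (coin, observer) combinations; the 1/√2 amplitudes here are the usual illustrative choice, and the point is that the two uncorrelated combinations carry zero amplitude:

```python
import numpy as np

# Basis ordering for (coin, observer): (H, sees H), (H, sees T),
# (T, sees H), (T, sees T).
labels = ["H & sees H", "H & sees T", "T & sees H", "T & sees T"]

# After the measurement interaction, only the correlated branches
# carry amplitude -- there is no "coin says H, you read T" term.
amp = 1 / np.sqrt(2)
state = np.array([amp, 0.0, 0.0, amp])

for label, p in zip(labels, np.abs(state) ** 2):
    print(label, p)   # each correlated branch has probability 0.5
```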

So when you do the measurement and get “heads”, you know which part of the wavefunction you are in; in some other world there is another you (the observer) who gets “tails”. There is no collapse or anything like it. This is the many-worlds interpretation of quantum mechanics. If you ask me which one of them is “real”, then the most honest answer I can give is,

“One of the best-kept secrets of science is that physicists have lost their grip on reality”

-Nick Herbert, Quantum Reality

Colossal Group of Bacteria that “eat” electricity discovered

Life on Earth is ultimately powered by electrons. Most organisms, including humans, get their electrons from food. Scientists from around the world were amazed to discover bacteria that “eat” electricity.

Some single-celled life forms have adopted the ultimate stripped-down diet. The presence or absence of food and oxygen doesn’t bother them; all they need to survive is pure electrical energy. These bizarre and seemingly unearthly organisms are easily found in muddy seabeds and along river banks. Biologists need very little effort to coax them out of hiding, thanks to their affinity for electrical energy: simply sticking an electrode into the sediment is enough to collect large populations of these bacteria. What these bacteria also offer the human world is a potential ability to clean up toxic waste.

Bacteria feeding on electricity might sound like something out of a science-fiction novel, but these life forms are not that exotic in behaviour or appearance. All forms of life, from microbes to blue whales, depend on a source of electrons to survive: the same electrons that zip around in electrical wires and form a circuit.

As mentioned earlier, the majority of life on the planet obtains electrons from the sugars present in food. Through a series of chemical reactions, the electrons are released and flow onto the oxygen that we breathe. This flow of electrons is what powers our bodies. In a nutshell, dumping electrons onto oxygen atoms powers the bodies of living things; so what happens if there is no oxygen to dump the electrons onto?

Geobacter metallireducens: bacteria with hair-like structures that conduct electrons (image: sciencephotolibrary)

Life forms have been found in places with low oxygen levels that depend on alternative electron dumps. In the quest to study this alternative mechanism, a group of scientists discovered a family of microbes called Geobacter metallireducens. These microbes were first discovered by Derek Lovley in 1987 on the banks of the Potomac River near Washington, DC. On encountering these organisms, scientists noticed that organic materials like ethanol were their source of electrons, just as for other microbes, but passing the electrons onto the iron oxide present in their vicinity is what made them stand out. Essentially, this finding conveyed that they breathe iron oxide instead of oxygen.

Strictly speaking, this isn’t breathing; bacteria do not have lungs or anything similar. The bacteria simply pass electrons to the metal oxide outside the cell body. Furthermore, these bacteria have fine hair-like wires attached to the cell body which function like copper wires as electrical conductors, and hence are termed microbial nanowires.

Pollution, energy needs and Bacteria

Geobacter metallireducens and some other species have the ability to effectively “eat” pollution and waste from anthropogenic sources. Humanity and the environment can benefit greatly from the bacterium’s ability to extract dissolved radioactive elements from groundwater and cure its contamination. This bacterium is also capable of converting the organic compounds present in oil spills into carbon dioxide. In fact, some scientists claim that such bacteria could power microbial fuel cells using only seaweed, urine, or sewage as food. The goal of depending on recycled energy can be seriously considered once more such bacteria are discovered.

Bioremediation using Geobacter metallireducens bacteria
(image: slideshare)

Interestingly, Geobacter sulfurreducens, also known as an electricigen, is capable of forming metabolically active biofilms, almost 50 μm thick, which help convert ethanoates to electricity. This property proves important for long-range electron transfer through the biofilms. A recently discovered mutant strain of Geobacter sulfurreducens can provide the highest known current densities. Further evaluations of similar species concluded that there are possibilities of adapting this organism to deliver even higher current densities.

Bacteria found in Human Bodies may have similar abilities!

Bacteria have been observed attaching themselves to conductive substances, such as the iron-rich mineral magnetite, in order to transfer electrons among each other through the magnetite. It is assumed that chains of magnetite can form, bridging the gap between the electron-donating and the electron-accepting bacteria. The settings these bacteria occupy may seem strange, but electron-eating and -breathing microbes can also be found in more familiar places. For instance, they have been identified in the digesters that turn brewery wastes into methane. Inside one such digester, Geobacter metallireducens was directly transferring electrons to another bacterium, Methanosaeta harundinacea, which was then carrying the electrons on to carbon dioxide. It is even possible that microorganisms in the human gut electrically interact with cells in the gut lining.

Image featuring magnetite, an ore on which bacteria can feed.
Bacteria can feed on minerals such as magnetite, an iron rich mineral.
(image: sciencephotolibrary)

All such life forms in sparse environments point towards a possibility that has been a field of study for ages: the existence of life on other worlds, such as Mars or Jupiter’s moon Europa. Astrobiologists, who are on a continuous quest to find evidence of extraterrestrial life, might find bacteria feeding on electricity an interesting clue in that quest. Whether life on other planets is discovered or remains a mystery, electricity-eating and -excreting bacteria are still a significant discovery, and one can conclude that the life forms in the extreme pockets of Earth are not yet fully explored.

All we need is to provide these bacteria with an electrode onto which they can breathe out electrons, and also to utilize their remarkable ability to steal electrons from toxic waste, cleaning up human waste and generating electricity in the process.

Such discoveries prove that the current ecosystem is still full of mysteries; 
and scientists are on a quest to find possible life on exoplanets. 
This shows how desperate we humans are 
to know the unknown and find the unsolvable…

To read more articles on science, biology and interesting stuff like this; Click Here

Glitch in the matrix: The Ultimate Unknown Science

Déjà vu

Wait, have I been here before? Have you ever visited a place for the first time and had it feel eerily familiar? Or maybe you’re deep in conversation with a friend and you suddenly get the feeling that you have had the exact conversation before, even though you know that you haven’t.

(image: Alice Nerr)

Déjà vu is a French term which translates as ‘already seen’. Markovitz (1952) described it as a feeling that the current situation has previously been experienced, despite the circumstances of the past experience (that is, when, where, and how the earlier experience occurred) being uncertain or even impossible. Spookily, some people consider déjà vu a sign of recalling an experience from a past life. In truth, déjà vu refers not to a prophetic vision but to a false memory: a ‘glitch’ in one’s understanding of what’s real.

** Studying the glitch in the matrix directly can collapse your brain. For your safety, let us first find out how a glitch in the brain affects us and our day-to-day living, with an interesting nod to the Final Destination movies ! ! ! **

Glitch In The Brain

Many researchers once thought that forgetting was a passive process in which unused memories decay over time. But contemporary research suggests that forgetting is an active process: our brain’s default state is not to remember, but to forget. As Oliver Hardt says, “To have proper memory function, you have to have forgetting.” The more often a memory is recalled, the more strongly it becomes encoded in both the hippocampus and the cortex. In due course, it exists independently in the cortex, where it is put away for long-term storage.

Can Science Explain Deja Vu?
Or is it just a glitch in the brain?
Or a glitch in the matrix? (image: Pinterest)

There is no doubt that people experience glitches in their stream of perception of external reality; this is a common topic of psychological study. A recurring theme of critical thinking and scientific skepticism is that we tend to explain these apparent glitches as internal neurological phenomena.

Sometimes illness or drugs bring on a heightened awareness that causes hallucinations, which get confused with déjà vu. So do precognitive experiences, where someone gets a feeling that they know exactly what is going to happen next, and it does (or a prognosticator prophesies a future event), and precognitive dreams, dreams that give us ‘déjà vu’ inklings. In the general run of things, true déjà vu typically lasts only 10 to 30 seconds; in contrast, these false memories or hallucinations can last much longer. (May this explanation remind you of the Final Destination movies :))

Ergo, when someone experiences a discernible incongruity, such as seeing a ‘ghost’, an unexplained object or phenomenon, something disappearing, or an amazing coincidence, the principle of neuropsychological humility means that they should first consider that the experience was a glitch of brain function, not an accurate reflection of external reality. Neurological explanations need to be modestly winnowed out before an external event is seriously entertained.

Our stream of experience is an extremely active constructive process. Perceptions are filtered, altered, enhanced, compared, matched to internal patterns, and altered again. Attention, cognitive biases, and expectations all shape our perceptions of reality.

Glitch in the brain; Déjà vu; Glitch in the matrix;
what is the connection?
(image: GLP)

Within this framework, the brain is not always functioning ‘normally’. We may be sleepy, drunk, highly emotional, or even experiencing seizures or similar neurological phenomena. But even when functioning perfectly, the brain is subject to glitches. Some researchers posit that similar neural misfiring – a glitch in the system – also causes healthy, seizure-free brains to experience a sense of familiarity when there’s no reason to. Such psychologically false déjà vu can arise from biological dysfunction (epilepsy), implicit familiarity and divided perception, subjective paranormal belief, or schizophrenia.

To be specific, when we experience something weird, our first premise should be that the experience is an internal phenomenon, a reflection of a glitchy brain, not an external reality, a reflection of a glitchy reality. Parsing reality is horrendously complex. Our brains do an incredible job, but they are also an evolved mess and have serious limitations. It is no wonder that our perception of reality is a little fuzzy at the edges.

Glitch in the Matrix – Virtual Reality

This subject matter has garnered plenty of attention and is prominent among quantum physicists, neuropsychologists and science-fiction movies alike; the idea has some high-profile advocates because it seems to erode our very notion of reality.

In 2003, the Swedish philosopher Nick Bostrom, a professor at the University of Oxford, argued from his growing body of work that at least one of the following three propositions is true:

  1. The Human species is very likely to go extinct before reaching a ‘posthuman’ stage.
  2. Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history(or variations thereof).
  3. We are almost certainly living in a computer simulation.

Are there good reasons to think that we are living in a simulation? After all, this is all just surmise. Will we ever be able to find any evidence of it?

A glitch in the matrix – Virtual reality
(image: johanna walderdorff)

The pre-eminent technique is to search for flaws in the program, such as glitches in the matrix. There might be stray errors due to rounding off or approximations in computations. For any event with several possible outcomes, the probabilities should add up to 1. If the simulators missed something, there could be errors of this kind that we would perceive as glitches.

But our skeptical instincts are dubious about this belief. One moment we perceive the universe we live in as a simulation, and the very next moment we feel it cannot possibly be. The just-risen sun shining softly on the city streets, bringing a flurry of morning activity with a cup of coffee in one hand; thick clouds blotting out the stars, inky blackness in a clear, cold night sky – how could such lavishness be faked?

The first Matrix movie was ingenious and unprecedented. (spoiler ahead :’) )

It depicts a dystopian future in which reality, as perceived by most humans, is actually a simulated reality called the Matrix. In the movie, the protagonists discover that inside this simulated environment they can control, bend, and manipulate space-time. Effects like slow-motion “bullet time” were groundbreaking and revolutionary when we first saw them 20 years ago; we now take them for granted. The Matrix is a virtual reality created by advanced intelligent machines to keep humanity under control whilst using them as an energy source to power themselves. This all occurred after man created AI and lost a war with it.

Consequently, most of humanity was living, unbeknownst to them, in a digitally simulated world. They were actually floating in pods, plugged in to an extensive computer. There are sporadic glitches, usually when those who control the Matrix introduce new code. Humans trapped in the Matrix experience these as déjà vu, or as errors in perception.

Keanu Reeves from The Matrix movie
May be depicting Glitch in the matrix 20 years ago
(image: GIPHY)

This was a gripping plot point because it contrasted with mainstream science fiction. Predictably, since then some people have started to believe that it applies to the real world and that we are actually living in the matrix. In the animated sequel ‘Animatrix’, glitches were used to explain apparent paranormal activity: a haunted house was simply a computer glitch.

The universe emerged from emptiness randomly; it only appears the way it does because emergent conscious intelligence makes it so. It is incoherent to say that anything existed before, since there was no time, and all probabilities are simultaneous in one eternal moment of creation, the ‘zero point’. Energy is zero, or undefined, at the zero point, so something has to destabilize the quantum stillness to initiate vibrations.

Glitch in the Matrix - Virtual Reality

Quantum phenomena require an observer. If there is no observer, the universe remains quantum and undefined, and such a universe would be empty and meaningless; observation is essential for reality to manifest. With every possible state of the wavefunction, reality splits into worlds according to the outcomes. It would be tremendous to see quantum-consciousness researchers run tests to determine whether scientists are discovering what they are looking for by collapsing the wave functions of smaller and smaller, or more and more distant, things via technology, thus extending our lens of observation.

If we had an adequately powerful quantum computer networked to consciousness, we could create a quantum framework from which classical reality arises. The quantum could be created on a super-super-D-wave as a simulation with replicated quantum physics and fields identical to those of our universe. Such a simulation would appear to us as a fine-tuned mystery. But this discussion is unlikely to get us anywhere.

From the Dark fans
May be we as humans are a Glitch in the Matrix (image: Netflix)

The abstraction of a base reality insinuates that there are multiple layers of reality, with the topmost layer being reality as it most obviously appears to us and the bottom layer containing reality as it actually is. There are other depictions of reality as well, such as perceived, unobservable, hypothetical, subjective, intersubjective, etc. However, this concept of base reality seems more cogent and persuasive because it is potentially inclusive of all realities.

We are all mired in conflict. So what do you want to do: stay down and drive yourself crazy with questions, or move on without answers?

Do we see reality as it is? Are our senses just mediocre inputs to our brain? Is all we have a garbled reality, a fuzzy picture we will never truly make out? Maybe truth doesn’t even exist. Maybe what we think is all we’ve got.

It’s the illusion of the choices

Or are our choices pre-made for us, popping up with a revelation you’ve secretly known all along?
You’re not the only force of nature at play here. Our subconscious, running in the background, making us doubt.

Atomic Dhruvi

Spicing Up Your Security: Salt and Pepper To Make Your Password Hash More Secure…

Today we live in an era where data is more valuable than life. Data security has become the chief concern, with websites spending a large part of their fortune just to keep their data safe. Ever wondered how your passwords are kept safe? Well, it is with salt and pepper.
Wait wait!!! I’m not talking about the salt and pepper you add to your seasoned breakfast. What I am talking about is the cryptographic salt and pepper that spice up your hash to make it next to impossible to decrypt.

So what exactly is happening here? What is a hash? What is salt? What is pepper? Why is it needed?

The Internet is an enormous place where billions of people store their data, and privacy is the key feature any website can offer. To maintain it, a system of login credentials, i.e. username and password, is used to keep the data safe. The password acts as a key: only people with the right password can access the data. This seems like a secure system; an attacker now faces an extra wall and must crack the password to steal data.

But, is it really so?

What if……. the website’s database containing all the users’ password is hacked????
That becomes a highway for the hackers to steal all data from any user they wish.

So websites incorporate something called hashes. These are cryptographic functions that take in a string of characters and output a string of jumbled characters that looks nothing like the original string.
For example,

 password@123 -> 8e7152d0eb52c340579f2d70a28eaf1a2c5ba1c5
Hash
The schema of hashing, the function takes in a large string in plaintext format and throws out a string of jumbled characters.
[Image: Binance Academy]

These hashing algorithms are written in such a way that even a small change in a single character gives a huge change in the output string. For example,

abcdef -> 1f8ac10f23c5b5bc1167bda84b833e5c057a77d2
abcdeg -> d8c771ae55b9a436034c64e1c0e0ff8b876a5faf

It is evident that changing the last ‘f’ into a ‘g’ caused a huge change in the hash output, showing how unpredictable these outputs are.

{{CODEmyjscode}}

To the geeks reading this post, here is the JavaScript for the SHA-1 algorithm

A string, when fed to a hashing algorithm, always gives the same output, and these hashes are practically impossible to reverse. And… Eureka!!! Here is a perfect way for you to store the passwords.

The websites store the hash of your password, and when you log in, they run your password through the algorithm and compare the result with the stored hash. If the generated hash and the one in the database match, access is granted to the user. In the case of a database security breach, the passwords are still safe, since there is no way the thief can reverse the hashing algorithm to squeeze the original password out of the hash. So hashes can be called the best way to protect your passwords.
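The store-then-compare flow can be sketched in a few lines of Python (a generic illustration, not any particular website's backend; SHA-1 is used here only to match the examples above — real systems prefer slower, dedicated password hashes):

```python
import hashlib

def sha1_hex(s: str) -> str:
    """SHA-1 digest of a string, as 40 hex characters."""
    return hashlib.sha1(s.encode("utf-8")).hexdigest()

# Signup: the site stores only the hash, never the password itself.
stored_hash = sha1_hex("password@123")

# Login: hash the submitted password and compare with the stored hash.
def login(submitted: str) -> bool:
    return sha1_hex(submitted) == stored_hash

# The correct password matches; even a one-character change does not.
print(login("password@123"))  # True
print(login("password@124"))  # False
```

Note that the comparison never needs the original password, only its hash.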

Or at least theoretically…

Why is the spicing up needed???

salt and pepper
[Image: eposts]

Well, just now we saw that hashes are the “ULTIMATE” security level and are uncrackable, so you might be wondering why salt and pepper are needed.

The answer lies in human behaviour.

Humans are very predictable beings. We tend to create passwords based on familiar keywords, names or dates. Someone named Bob, whose birthday is on 10th June will most probably set his password as Bob@1006 or Bob@123 or 1006_Bob and so on. This gives an advantage to the hacker. Such information about a person is easy to gather.

The hacker can very easily create a list of all passwords that a person might possibly set. Trying out each and every password in the list gives a high probability of success of finding the right combination. Thus a person’s predictability makes him prone to the attack (This is called a dictionary attack).
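A toy dictionary attack against Bob might look like this in Python (the candidate list and the "leaked" hash are hypothetical, built purely for this sketch):

```python
import hashlib

def sha1_hex(s: str) -> str:
    return hashlib.sha1(s.encode("utf-8")).hexdigest()

# A leaked database holds only the hash of Bob's password...
leaked_hash = sha1_hex("Bob@1006")

# ...but an attacker can guess likely name/birthday combinations.
candidates = ["Bob@123", "1006_Bob", "Bob@1006", "bob2020", "Bob_0610"]

cracked = next((p for p in candidates if sha1_hex(p) == leaked_hash), None)
print(cracked)  # Bob@1006 — the hash was never "reversed", just guessed
```

The hash function itself is never broken; the attacker simply hashes guesses until one matches.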

Here is a list of the most common passwords of 2020. This just shows how easy it is to predict human behavior.

Furthermore, the hackers could also get the hashing algorithm into their hands, and create a table of the hashes of all such common passwords (called the rainbow table). Then just comparing their hash database with the rainbow table can give them the right combination.   

This is where the spicing up enters. Salt is a random string of characters added to the input password before it is hashed. So if you enter the password password@123, the website backend automatically changes it into something like &*(^z1password@123 and then hashes it. This makes the password much harder to crack.

This salt is stored in the database along with the hash itself, like e64854077f4306d070790e521eadfad7de5ab9d7@r^$#.z1.
This might seem counterintuitive, but since every hash is stored with a different salt, to build a rainbow table the hacker needs to make a different table for every salt. This renders the rainbow table attack useless, since the salt makes each password longer and harder to precompute. But once the salt is known, dictionary attacks are still a threat to the system.
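Here is a minimal Python sketch of salting, storing the result in the same hash@salt format as the example above (the helper names are my own, and SHA-1 again just mirrors the article's examples):

```python
import hashlib
import secrets

def sha1_hex(s: str) -> str:
    return hashlib.sha1(s.encode("utf-8")).hexdigest()

def hash_with_salt(password: str) -> str:
    """Hash salt+password and store both together as 'hash@salt'."""
    salt = secrets.token_hex(8)  # a fresh random salt per user
    return sha1_hex(salt + password) + "@" + salt

def verify(password: str, stored: str) -> bool:
    digest, salt = stored.split("@")
    return sha1_hex(salt + password) == digest

stored = hash_with_salt("password@123")
# The same password salted twice yields two different records,
# so one precomputed rainbow table cannot cover all users.
print(stored != hash_with_salt("password@123"))  # True
print(verify("password@123", stored))            # True
print(verify("hunter2", stored))                 # False
```

Since the digest and the hex salt contain no `@`, splitting on `@` cleanly recovers both parts at login time.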

salt example
An example of how the salt is added to the password.
[Image: Michal Špaček]


So if salt is not enough, spice up your food with pepper *shrugs*.
Well, pepper is similar to salt but is generally shorter than a salt.
Waaaait a second… doesn’t that make it weaker than salt?

Simple answer- NO.

This is because the pepper is not stored at all. Not even the website backend knows what the pepper string is, or at which position it is added to the password. When the user inputs the password to log in, the backend algorithm hashes every possible pepper combination and checks each output against the stored hash. Only if one of them matches is the user given access. This is practically foolproof: no rainbow table can break this level of security, nor any dictionary attack, rendering all the efforts of a hacker useless.
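A minimal Python sketch of the idea, using a single appended character as the pepper for brevity (real peppers are longer, and as noted above their position can vary too, which this sketch does not model):

```python
import hashlib
import string

def sha1_hex(s: str) -> str:
    return hashlib.sha1(s.encode("utf-8")).hexdigest()

PEPPER_ALPHABET = string.ascii_lowercase  # 26 possible peppers

def store(password: str, pepper: str) -> str:
    # The pepper is mixed in before hashing but never written anywhere.
    return sha1_hex(password + pepper)

def verify(password: str, stored_hash: str) -> bool:
    # The backend doesn't know the pepper, so it tries every candidate.
    return any(sha1_hex(password + p) == stored_hash
               for p in PEPPER_ALPHABET)

stored = store("password@123", "q")    # 'q' is forgotten immediately
print(verify("password@123", stored))  # True
print(verify("wrong-guess", stored))   # False
```

The extra work at login is deliberate: an attacker must pay the same brute-force cost for every single guess.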

pepper
[Image: PagerDuty]

So salt and pepper not only enhance your seasoned breakfast, they also spice up your security. But just as good and evil are two faces of the same coin, every development brings some shortcoming with it, and these hashing functions are not fully foolproof either. There is something called the pigeonhole principle, which states that if n items are put into m containers, with n > m, then at least one container must contain more than one item.

The hash produced is only 40 characters long and contains only lowercase letters and numbers, but the input string can be of any length and can include uppercase and lowercase letters, symbols and numbers. Thus the number of possible input strings is greater than the number of possible hashes. So, ultimately, two strings somewhere out there have their fates intertwined and are destined to share the same hash. Such a collision has in fact been observed.
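The pigeonhole principle is easy to demonstrate by shrinking the output space. A hypothetical "tiny hash" that keeps only the first 2 hex characters of SHA-1 has just 256 possible outputs, so hashing 257 distinct strings must produce a collision; full SHA-1 makes the same inevitability astronomically harder to reach:

```python
import hashlib
from itertools import count

def tiny_hash(s: str) -> str:
    """SHA-1 truncated to 2 hex characters: only 256 possible outputs."""
    return hashlib.sha1(s.encode("utf-8")).hexdigest()[:2]

# Feed in distinct strings until two of them share a tiny hash.
seen = {}
for i in count():
    h = tiny_hash(f"string-{i}")
    if h in seen:
        print(f"'string-{seen[h]}' and 'string-{i}' both hash to {h}")
        break
    seen[h] = i
# The pigeonhole principle guarantees this loop stops within 257 steps.
```
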

But it is not a matter of worry because such a collision is very rare and is very difficult to obtain. So right now none of you really need to worry about your passwords being stolen or hashes being broken and can sleep peacefully with your privacy assured.

Ending Cheating In Gaming: New GCI Developed By Computer Scientists


The computer gaming industry is one of the largest entertainment industries today, with a market value of hundreds of billions of dollars. Such a hefty sum of money is generated from millions of gamers all over the globe playing endlessly and tirelessly. Games like PUBG, Counter-Strike and Call of Duty, when played on a common online server, are called Massively Multiplayer Online Games (MMOGs). They share lots and lots of data continuously with the online server. Because of such an open network, players deploy cheats while playing, either for profit or for fun.

Today, cheating while gaming is trending, and it has become a challenge for developers to overcome. For instance, a player may deploy a cheating mechanism to collect a large amount of in-game virtual currency or assets using an automated program called a game bot, and then trade it for real cash. Such malicious behaviour affects the overall gaming experience. More importantly, it gives cheaters an unfair advantage over other naïve players, and in the end it also affects the popularity and image of the game in the market.

Ending Cheating In Gaming: New GCI Developed By Computer Scientists
Image: ALTAZ.in

But computer scientists at the University of Texas at Dallas have come up with a counteroffensive: they have devised a new weapon against video game players who cheat.

Let us see how computer science is all set to take down those cheaters and end cheating in gaming.

Basics of the GCI and Approach of the Scientists

Game developers invest a large amount of effort to detect and prevent cheats that give cheaters an unfair advantage over other naive users during game play. Typically, Massively Multiplayer Online Game (MMOG) clients share data with the server during play, and developers leverage this data to detect cheating and malpractice. However, detecting cheats is not easy because of limited client-side information. And while game developers strive to implement mechanisms that detect various cheating behaviours, cheaters find techniques to evade such detection mechanisms.

So, GCI literally means Game Cheating Identification.

The scientists prepared a training set containing the game traffic data of players with labels (cheat, normal). The data was then randomly split into two sets: one for training the classification model, and the other for testing the performance of the entire approach.

They assumed that there exists a distribution bias between the training set and the test set due to sampling bias. This is called covariate shift: a change in the distribution of the input variables between the training and the test data.

Overview of GCI framework helping to end cheating in gaming

Due to the assumption of covariate shift, they used a Gaussian kernel model (a complex approach; Click Here to learn more) to estimate α, the relative density ratio associated with the training instances. Since these instance weights help correct the distribution bias of the training data, a classifier trained using the weighted labelled instances can generalize well to test instances. As a result, GCI trains a classifier using the weighted training instances for label prediction on test instances. The bias correction phase terminates by producing a bias-corrected classifier. As each new instance arrives in the test set, the classifier's prediction is used to identify game cheaters.
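The intuition behind this bias correction can be sketched with a toy density-ratio estimate in plain Python. This is not the paper's estimator (which fits α with a Gaussian kernel model); it just weights each training point by a crude kernel-density estimate of p_test(x)/p_train(x) on a made-up 1-D traffic feature:

```python
import math
import random

def kde(x, points, sigma=1.0):
    """Crude Gaussian kernel density estimate at x."""
    return sum(math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
               for c in points) / len(points)

def density_ratio_weights(train_xs, test_xs):
    """Weight each training point by ~ p_test(x) / p_train(x),
    so points that resemble the test traffic count for more."""
    return [kde(x, test_xs) / max(kde(x, train_xs), 1e-12)
            for x in train_xs]

# Toy 1-D feature: the test traffic is shifted relative to training.
random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(200)]
test = [random.gauss(1.0, 1.0) for _ in range(200)]

weights = density_ratio_weights(train, test)
# Training points that look like the test distribution receive the
# larger weights, which is exactly the bias-correction effect.
```

A classifier would then be trained on the labelled training data with these per-instance weights (for example via a `sample_weight`-style argument), yielding the bias-corrected classifier described above.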

Deployment of Framework into the Gaming Server

This GCI framework for cheat identification is then deployed on the game server side, which helps in cheat detection over encrypted network traffic, as shown in the figure below. Although the traffic is decrypted at the server side, the computer scientists aim to perform detection over the encrypted traffic. The main reason is to have a game-independent method for cheat detection: using the decrypted traffic would involve variables that are obviously game-specific. It is also easier to evaluate over encrypted traffic, as most games are not open-source.

Since the mechanism is not game-specific, it can be deployed for cheat identification on the client side as well. Naturally, a major concern is the availability of a secure, tamperproof mechanism, to prevent adversaries from compromising the detection system or, worse, disabling it.

Overview of architecture and its deployment to end cheating in gaming

Hardware-based cryptographically secure environments such as Intel Software Guard Extensions (SGX) have become a fascinating platform for this: they help execute software securely in an untrusted environment. Intel SGX, an extension to the Intel x86 architecture, is a hardware-assisted trusted execution environment that allows the trusted part of an application to be executed in a secure area of memory called an Enclave.

Application developers can protect their code and data from modification or disclosure by an adversary by deploying it within an Enclave. If a gamer uses an SGX-enabled machine, a similar cheat detection mechanism can be deployed directly on the client side, as illustrated in the figure. However, the computer scientists have left its demonstration for future work.

The Role of GPU and Ending Cheating in Gaming

A GPU is a co-processor alongside the CPU which is efficient at handling graphics-specific calculations and image processing. The GPU has a flexible parallel structure suited to high-performance computing, which is why leading vendors have turned modern GPUs into fully programmable co-processors with their own software development kits. The computer scientists therefore utilized the high memory bandwidth and powerful parallel processing capability of the GPU to make their mechanism faster, while at the same time reducing the processing burden on the CPU.

Using the GPU for GCI, the performance of the entire technique was improved. For the experiments, the computer scientists used an NVIDIA GeForce RTX 2080 Ti GPU, which contains 4352 NVIDIA CUDA® cores and 11 GB of GDDR6 memory, with version 10.0.130 of the NVIDIA CUDA framework installed.

First, the researchers compared their GPU-based GCI technique with the normal execution of GCI, using 10 different groups of data sets and recording the classification accuracy and execution time of the GPU-based GCI.

Comparison of performance between GCI and GPU-based GCI

From the table, it is apparent that the proposed GPU-based GCI outperforms GCI in terms of execution time for both multi-class and binary data set experiments.

Summarizing and Concluding things

By monitoring the data traffic obtained from the players, researchers identified patterns that indicated cheating. They then used that information to train a machine-learning model, a form of artificial intelligence, to predict cheating based on patterns and features in the game data.

The researchers adjusted their statistical model, based on a small set of gamers, to work for larger populations. Part of the cheat identification mechanism involved sending the data traffic to a graphics processing unit, which is a parallel server, to make the process faster and take the workload off the main server’s central processing unit. The researchers plan to extend their work to create an approach for games that do not use a client-server architecture and to make the detection mechanism more secure.

Interestingly, a major part of the research was done by playing the MMOG Counter-Strike online, utilising three gaming cheats: AimBot, Speedhack and Wallhack. AimBot automatically targets opponents; Speedhack allows the player to move faster; while Wallhack makes walls transparent, letting the player see their opponents easily. The computer scientists have potentially succeeded in mounting a counteroffensive against these cheats, ensuring that games like Counter-Strike and other MMOGs remain complete fun and fair for all players. While testing on Counter-Strike, they issued a warning and gracefully kicked the player out if they continued cheating within a fixed time interval.

With such interesting games out in the market, computer scientists are now also focused on ensuring that they remain interesting. Conducting such detailed research for the gaming industry opens new paths for this fast-growing industry. At the same time, it raises the level of gaming experience that developers need to provide, and which players demand.

OSD

For more exciting things on gaming Click Here.

To read full research paper; Click Here.

Weather-Proof Chips: Taking The Modern Communication Technology To Next Level

Today's modern communication systems rely widely on how fast information can be transmitted. At present, these systems rely on a similar formula and follow a similar process to transmit data: devices, data centres, towers and satellites are the common ingredients for the transfer of information. This makes the communication system dependent on various factors that may slow down the entire process, mainly geographical conditions and weather. To overcome this and make data transfer simpler and faster, researchers at the University of Texas at Austin have come up with so-called “weather-proof chips”.

Let’s see how these “Weather proof chips” can change the future scenario of self-driven cars, military communications and other things that we have not even thought of yet.

**Caution** Don’t scroll down directly to the end of this article. There is some insane technology mentioned down there.

But before that, let's get a quick grasp of the principle behind this technology.

Little Background about Weather Proof Chips:

It basically works on the principle of beam steering, in which light is redirected towards a specific target by changing the direction of the main lobe of the radiation pattern. This is widely used in radio systems, radar systems, acoustics, optical systems, and now in making weather-proof chips. So why specifically is this principle used?

Integration of mm-wave beam steering and optical wave beam steering
This is just an example of how beam steering can change things potentially
(image: ResearchGate)

Because it allows signals to be transmitted more accurately than other methods, which in turn reduces interference and saves power. But beam steering also has a disadvantage: it can only bounce light in narrow directions. Compare it to a person with poor peripheral vision who cannot see beyond a certain extent. The researchers had a solution for this too, and these weather-proof chips feature much wider angles: they increased the range of steered light by 30 degrees.

This was made possible by the material they used, indium phosphide (InP): the beam-steering technology was monolithically integrated with InP-based quantum cascade lasers (QCLs).

(The study of QCLs is a vast and complicated topic in itself. Interested readers can click here to know more.)

Further About the Technology:

The principle above lets the researchers tune the chips to operate in a specific region of the light spectrum: the mid-infrared, which allows signals to penetrate clouds, rain and other bad weather. It also helps signals reach their intended target without shedding or losing a significant amount of light. Low light loss means signals can travel farther through the Earth's atmosphere, with better integrity and less power consumption.

The device's performance can be improved further by tweaking the phase shifters and waveguides that change the current inside it, and many enhancements have already been demonstrated, for instance in optical phased arrays (OPAs). OPAs are a technology widely used in today's optical sensors. They enable random-access pointing, programmable multiple simultaneous beams, a dynamic focus/defocus capability, and moderate to excellent optical power handling, all while keeping the device light-weight.

Schematic illustrations of the OPA device used in weather proof chips (not to scale). (a) Entire layout with input and output light indicated; (c) current flow for resistive heating of the phase shifter. (image: OSA Publishing)

Also, as mentioned earlier, these chips work in the mid-infrared region, where a wide range of possibilities for development exists. This work thus contributes to the experimental progress in the burgeoning field of mid-infrared integrated photonics. As more devices are demonstrated and optimized, and as integration schemes mature, more efficient and compact chips can be realized for a wider range of applications in this rich spectral region.

Taking Self-Driven Vehicle Technology to the Next Level

As one application of weather proof chips, autonomous vehicles were tested. Current self-driven cars are equipped with those weird, bulky devices on their roofs: LIDAR systems. LIDAR stands for Light Detection and Ranging; it detects the surroundings using light in the form of a pulsed laser and measures varying distances. Typical LIDAR devices have to spin continuously because of their limited field of vision, and any time you rely on a moving system, there is a chance of it breaking.

Typical LIDAR sensor working in an autonomous vehicle (image: metrology.news)

These weather proof chips don't have to move or spin because they have a wider field of vision. With less movement, the problem of light trailing off in various directions and reducing efficiency also disappears. Importantly, beam steering should leave no blind spots. Current LIDAR systems, because they spin continuously, inevitably create many blind spots; with these weather proof chips that risk is minimized. And fewer blind spots mean greater safety in situations where momentary lapses can prove dangerous.

Changing the Future by Creating It: Witness the Weather Proof Chips

Apart from the automobile industry, the researchers claim other interesting applications for weather proof chips too. They can potentially be integrated into everything from military systems to satellites to skyscrapers. Research is ongoing to infuse artificial intelligence into the device for environment sensing. Interestingly, the chip works in the mid-infrared region, a part of the spectrum humans can't see (without aids like night-vision goggles) but devices can. In this range, such devices can pick up things like gas leaks and smokestack emissions.

In this era of ever-growing internet speeds, these chips could be a potential piece of equipment to use. In big cities, where it's not practical to dig deep underground to lay fibre cables, these devices could increase internet speeds (and perhaps boost wireless data speeds for 6G networking).

Weather proof chips opening new pathways for free-space optical communications (image: Amonics)

The most insane application, and a dream of every future skyscraper, is putting these chips atop high-rise buildings. They could enable free-space optical communication, an efficient technology that lets wireless data travel through the air using light. Interestingly, the researchers' next big project is field-testing the device and refining its packaging for exactly this application.

With such advancements in technology, overcoming Mother Nature's barriers now seems within reach.

OSD

Quantum Computing is Here To Disrupt Your Privacy Again

We all know that the quantum computing industry has seen a boom in resources and funding over the last two decades. The largest tech companies want to build a piece of this remarkable machinery and bag the title of "Quantum Supremacy." The term itself is very hard to define; in other words, it is ineffable!

Now a quick recap: in the last article, I talked about the basics of quantum computing and the very basics of the RSA algorithm. I also mentioned something about the Lightsaber, if you can recall. Can't recall? Here's a link. (Just in case! 😜)

Let's dive right in, shall we? (Image: Giphy)

The Ultimate Quantum Computing Weapon

In 1980, the then very famous Richard Feynman tried his hand at quantum computing. He proposed the crazy idea that if we could somehow develop a machine that works with qubits, it would be one of the biggest miracles of mankind so far, and the possibilities he could imagine were endless. Yet besides doing tasks that supercomputers weren't capable of (or took ages over 🥱), scientists at the time had nothing on the plate that would attract brilliant minds or heavy pockets.

Mr. Feynman lecturing on quantum computing. (Image: ayltai.medium)

The year 1994 changed the whole game, when Peter Shor, an ex-Caltech sophomore, published a paper that became the crown jewel of the quantum computing syllabus: a paper that would set the roadmap for the future of quantum computing.

The Lightsaber: Shor’s Algorithm

The suspense is over now. As sketchy and techy as the name I gave it seems, this isn't some kind of algorithm you can carry in your pocket, plug in, and use to steal everything. It requires two major things: firstly a quantum computer, and secondly error-reduction methods.

Peter Shor, inventor of Shor's algorithm and Professor of Applied Mathematics at MIT. (Image: Nature.com)

But let us first hear what Peter Shor has to say about his algorithm in his own words. “At first, I had only an intermediate result. I gave a talk about it at Bell Labs [in New Providence, New Jersey, where I was working at the time] on a Tuesday in April 1994. The news spread amazingly fast, and that weekend, computer scientist Umesh Vazirani called me. He said, “I hear you can factor on a quantum computer, tell me how it works.” At that point, I had not solved the factoring problem. I don’t know if you know the children’s game ‘telephone’, but somehow in five days, my result had turned into factoring as people were telling each other about it. And in those five days, I had solved factoring as well, so I could tell Umesh how to do it.”

In the previous article, I discussed the working basics of the RSA algorithm. All of today's RSA-style encryption systems use a composite number N that is the product of two large prime numbers, with the numbers running to hundreds of digits. Shor's algorithm provides the upper hand: a way to factor N and hence break the encryption.

Deciphering Shor's Algorithm

Now, this remarkable algorithm works in a certain number of steps. Any prior mathematical knowledge would prove beneficial now. (Just kidding! 😂)

Firstly, we choose a random number g < N (the composite number), a step that can be performed classically, by human minds as well as traditional computers. One thing I'd like to mention: g doesn't need to be an exact factor of N; it's fine if it merely shares a factor with N.

Now I need you to understand one thing: if A and B are co-prime, then A^n = m*B + 1 for some whole numbers n and m. (This is a proven result.) Take 7 and 50: they don't share a factor, but 7^4 = 2401 = 48*50 + 1.
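
You can check this identity on any ordinary computer. The `smallest_power` helper below is just an illustrative brute-force search for the exponent, nothing from Shor's paper:

```python
# 7^4 = 2401 = 48*50 + 1, so 7^4 mod 50 must equal 1
assert pow(7, 4, 50) == 1

def smallest_power(g, N):
    # brute-force the smallest p > 0 with g^p ≡ 1 (mod N);
    # only feasible for tiny numbers, which is exactly the point
    value, p = g % N, 1
    while value != 1:
        value = (value * g) % N
        p += 1
    return p

print(smallest_power(7, 50))  # -> 4
```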

So our random guess follows g^p = m*N + 1 (p ≠ 0). Moving the 1 across, g^p − 1 = m*N, which can be factored into:

(g^(p/2) + 1) * (g^(p/2) − 1) = m*N

The terms on the left-hand side are our new and improved guesses. Even if they are only multiples of the factors of N, we have a pretty darn efficient, 2000-year-old algorithm for pulling out the factors themselves: "Euclid's algorithm."
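
Here is that classical finishing step as a minimal sketch, with toy values N = 15 and g = 7 (absurdly small next to a real RSA modulus) and Python's built-in `math.gcd` standing in for Euclid's algorithm:

```python
from math import gcd

N, g, p = 15, 7, 4            # p = 4 because 7^4 ≡ 1 (mod 15)
assert pow(g, p, N) == 1

half = pow(g, p // 2)         # g^(p/2) = 49
f1 = gcd(half + 1, N)         # Euclid's algorithm on g^(p/2) + 1
f2 = gcd(half - 1, N)         # Euclid's algorithm on g^(p/2) - 1
print(f1, f2)                 # -> 5 3, and indeed 5 * 3 == 15
```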

How Shor's algorithm works.

That's all, folks: the algorithm that can break your privacy. But wait, you will say, "If this is so simple, why can't we run it on today's computers already? 🤷‍♂️" Well, there are some major hurdles in between.

Firstly, traditional computers don't have the processing power to break the encryption this way; for reference, a normal desktop would take about 2000 years to break a 78-digit key. Secondly, there's a chance that one of the numbers on the left side is itself a multiple of N, in which case the other is merely a factor of m. Thirdly, the power p might be odd, in which case we can't split it in half.

Still, this algorithm works about 37.5% of the time per guess. Great! But wait, why exactly is a quantum computer needed? How do we find p? The answer lies in quantum mechanics itself.

The Real Thief Is Here

The key difference between quantum and classical computers is that a quantum computer can perform operations simultaneously. Now, let's take another guess x and raise g to that power. A classical machine would do this for each x individually, but a quantum computer speeds the process up ridiculously, all thanks to quantum superposition.

A cartoon explaining the weird concept of quantum superposition. [Source: Pinterest]

It takes all the values of x at once and raises g to each of those powers. After that, we calculate the remainder r on division by N and save it. We can't measure the whole superposition directly, because the quantum measurement problem means measurement would destroy it. So we measure only the remainder of the saved superposition and get some random value. The special thing is that after this measurement we are left purely with the superposition of powers that produce that remainder.

Measuring only the remainder leaves a superposition of all the possible states resulting in that remainder. (Image: minutephysics (YouTube))

How does this benefit us? It's because of the repeating nature of the powers: all the powers that give the same remainder are exactly p apart. So the only challenge left is to measure this period, or its frequency, and voilà, you have the key!
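
With toy numbers, even a classical computer can brute-force this periodicity and make it visible (something it cannot do for 200-digit values):

```python
# remainders of g^x mod N repeat with period p; toy values g = 7, N = 15
g, N = 7, 15
remainders = [pow(g, x, N) for x in range(1, 13)]
print(remainders)  # -> [7, 4, 13, 1, 7, 4, 13, 1, 7, 4, 13, 1]
# the block [7, 4, 13, 1] repeats every 4 steps, so the period p = 4
```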

You guessed it right! There's also a twist here: to find p, you need the crux of Shor's algorithm, the "Quantum Fourier Transform."

The working of the QFT, giving a frequency as output. (Image: minutephysics (YouTube))

The Jewel Of The Crown Jewel

The full working of the QFT is too complex to describe here, and we have already skipped a fair amount of detail in the previous sections. So, staying a tad subtle, I'll just mention that the Fourier transform (FT) is widely used to find frequencies.

The QFT described using Hadamard gates and unitary rotations of individual qubits. (Image: Wikipedia/QFT)

Given an audio signal (a wave) as input, the FT converts its amplitudes into a frequency graph. With that graph at your disposal, you can read off the frequency and, ultimately, the period.
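
As a classical stand-in for the QFT, here is an ordinary discrete Fourier transform doing exactly that job: pulling the dominant frequency out of a wave. The 5 Hz signal and 100 Hz sample rate are just illustrative choices (requires NumPy):

```python
import numpy as np

rate = 100                                  # samples per second
t = np.arange(0, 1, 1 / rate)               # one second of time stamps
signal = np.sin(2 * np.pi * 5 * t)          # a 5 Hz wave, period 0.2 s

spectrum = np.abs(np.fft.rfft(signal))      # amplitude at each frequency
freqs = np.fft.rfftfreq(len(signal), 1 / rate)
dominant = freqs[np.argmax(spectrum)]
print(dominant)                             # the peak sits at 5.0 Hz
```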

Above, I mentioned that by measuring the remainder we are left with a superposition of all states sharing it. Every state contributes a sine wave, and these interfere constructively and destructively to produce a wave with a frequency of 1/p. And as long as p is even and the improved guesses are not multiples of N, you have THE KEY.

The Post Quantum Computing Era

The subject of speculation: is our data safe? A cartoon on the common questions regarding privacy.

To sum up, we explored how quantum computing algorithms can put the Internet's security at risk. Are we there yet? As much as I hate to be the bearer of bad news, the answer is NO! The moduli of encryption schemes like RSA-768 and RSA-2048 are so big that we cannot factor them today, even on the world's largest quantum computers.

Quantum computers would need roughly 5900 qubits to factor even a small RSA-230 modulus, and there's been a major wave of excitement because D-Wave has constructed a 5900-qubit machine (though its annealing architecture cannot run Shor's algorithm directly). Maybe one day we'll have a quantum internet that takes cryptographic systems to a whole new level.

A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.

Alan Turing

A vote of thanks to Minutephysics, nature.com and Qiskit.

Physics and its not-so-popular language


Physics has gone through a big revolution over the years, updating its theories from time to time. There is not only experimental evidence to follow but also philosophical debates to pass through. Every new and wonderful theory of physics has been debated for its implications and methodology; sometimes theories are even discarded for philosophical reasons and replaced with new explanations. There have been many such incidents regarding quantum mechanics, yet it remains one of the most wonderful theories we have. The road physics has taken is wonderful in its own sense.

Let's take a look at part of that journey. For this, we have to start from the beginning, forgetting all the laws we know for a while.

A New Beginning

Suppose there is a book on the table in front of you. When you look at it, it does nothing. When you push it, it starts moving, and when you stop pushing, it stops. So you can say you've discovered something. Aristotle did the same and concluded that everything has a natural state, which is to be motionless: when you apply a force, an object comes out of its natural state and moves; when you remove the force, it returns to its natural state of rest. Congratulations, you've discovered a law that governs the motion of objects and anything you see.

If you're brave enough to keep thinking, you might come up with the metaphysics that "for every motion, there is a mover": if you see something moving, something must be moving it, and something else must have moved that previous mover. Trace the chain of mover and moved far enough and you arrive at an ultimate unmoved mover, i.e. God.

Book lying on surface; how language of physics has changed

We know today that the laws of physics don't work this way; this principle of "cause and effect" is not true.

For ages, there has been a philosophical tradition of seeking a basic understanding of things: a deeper understanding of physics which states that "everything happens for a reason." There is a cause for everything; if you see something moving, that is because something else is moving it. This is not a statement out of thin air; it is what we call the "principle of sufficient reason." It states that nothing is random: there is a reason for everything.

There are two things to say about this principle:

  1. It is not right.
  2. Despite the fact that we know it is not right, we haven't abandoned it.

We haven't abandoned it; we've been using this language for ages. We know that more fundamental laws governing nature have been identified, but we've stuck to this language.

There might be several reasons for this, but one of the main ones is that it works very well in the everyday domain of observing things (even if it does not work well at the fundamental level).

Let's Find Out Then

Everyone might be aware of the Wile E. Coyote and Road Runner show. In almost every episode, this character runs off a cliff, but he doesn't fall down. He doesn't fall until he looks down and realizes that he is off the cliff. This is what we call "the law of cartoon physics."

language of physics used in cartoons; from the Road Runner show
(Image: Medium)

We are all such characters, because we have absorbed the fundamental language of Aristotle: every moving object has a mover. Physics says that Aristotle was wrong, but our common sense says he was right. And he was right in a sense, because he had an accurate way of explaining things: "cause and effect" is an accurate description in our usual domain. We know that when we stop pushing the book, it stops moving, and that's true.

But many years later there was a person, Ibn Sina, who was not convinced by Aristotle's idea of natural motion. He said that this holds for a book on a table, but if you imagine doing the same experiment in space, where there is nothing else, the book would continue to move forever. Today we know his idea as "conservation of momentum." He was also one of the first to entertain the idea of a vacuum, even though at the time no one knew whether a vacuum could exist.

Then, after many more years, Galileo comes into the picture. He made certain rules to explain the motion of objects: if you have a recipe (the law governing the motion), the initial position of the object and its initial velocity, and you then add friction, air resistance and other effects to this recipe, you'll have a much more accurate way of explaining the phenomenon.

This was a fundamentally new way of explaining things. It says there is no mover moving the object; instead there are equations and conservation laws that explain the phenomenon of motion.

A new era; A new Journey

Newton then comes up with a theory of motion explaining everyday objects, and if we study it we can arrive at something called the "conservation of information." Laplace said that if we have two billiard balls and are given their starting positions and velocities, then by solving Newton's equations we can predict the future of the billiard balls, i.e. their position and velocity at any instant in the future.

But that is not all. Laplace said that if you give me the information at any moment in time, we can determine not only the future but also the past; we can know what happened before, i.e. "information is equally stored in every moment of its existence."
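
Laplace's claim is easy to demo in code: integrate a falling particle forward with a time-reversible scheme (velocity Verlet here), flip the velocity, integrate again, and the initial state reappears. All the numbers are arbitrary illustrations:

```python
G = 9.8                                  # gravitational acceleration, m/s^2

def verlet_step(x, v, dt):
    # velocity-Verlet update for constant acceleration a = -G;
    # this scheme is exactly time-reversible
    x = x + v * dt - 0.5 * G * dt * dt
    v = v - G * dt
    return x, v

def evolve(x, v, dt, steps):
    for _ in range(steps):
        x, v = verlet_step(x, v, dt)
    return x, v

x, v = evolve(10.0, 0.0, 0.001, 1000)    # one simulated second, forward
x, v = evolve(x, -v, 0.001, 1000)        # flip velocity and run again
print(x, v)                              # back near the start: x = 10, v = 0
```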

language of physics proposed by Newton; law of conservation of information

With this idea, we can conceive of what is called Laplace's demon: a being that knows the position and velocity of every single object in the universe, knows the laws governing them and, given the capacity to do the calculation, can determine everything. For the demon there is no difference between past and future; both are equally transparent. There have been long-running debates about determinism and free will, but only the language has changed.

(Image: @ricard_sole (Twitter))

The Changing Language

The language has changed in the sense that, instead of talking in terms of cause and effect, we now talk in terms of patterns. Let's take the set of integers as an example.

{…, -3 , -2 , -1 , 0 , 1 , 2 , 3 , 4 , 5 ,..}

If we take a number, say 4, we can tell that the number following it is 5, because there is a pattern. We don't say that 4 is the cause of 5, or that 5 is the effect of 4.

Laplace gave us this idea of patterns: the laws governing everyday objects are just patterns and differential equations. Even though we know all of this, we are yet to truly absorb the language of patterns.

We know that these ideas are not the complete story, because much has changed since the introduction of relativity and quantum mechanics, but the laws in the local domain are more or less the same. Even though the fundamental, microscopic picture has changed a lot, things at the macroscopic level are more or less the same, and they may well remain so. We just have to adopt the new language of the laws governing things.

To get familiar with more of such not so known physics theories; Click Here.

Ending Cancer With Diamonds! A New Step Towards A Better World

It is hard to believe from the title of this article that diamonds, among the costliest gems in the world, are used for medical treatment. But yes, research has proven that diamond nanoparticles find application in modern biomedical science and technology. With their cutting-edge properties, diamonds have proved to be a lot more than just a gem, and they have shown that ending cancer with diamonds, which sounds fictional (yet genius), is now within reach. Thanks to nanotechnology and the macro brains of scientists. Let's begin with a little background history.

It all began in the 1960s in the USSR, where nanodiamonds were first produced. However, their properties remained unexplored until 1988, when their use in wear-resistant coatings and anti-wear additives for motor oil was discovered.

But real interest in, and production of, nanodiamonds (NDs) began in the late 1990s, when their biomedical applications were found.

Many platforms have been explored for the current challenges in medical therapeutics and biomedical applications. Among them, nanodiamonds have turned out to be promising carriers due to their excellent biocompatibility. They can be embedded in polymer-based microfilm devices that enable the slow release of a drug at the cancer site, conserving a great amount of the drug, and they offer unique surface-mediated binding of a wide variety of bioactive molecules.

A nanodiamond is the smallest diamond but an authentic one!
(Image: Diacel Corporation)

Nanodiamonds range from 1 to 100 nm in size. They are characterized by chemical stability, octahedral symmetry, a rigid structure, and a large surface area. Two types are mainly used in medical applications: detonation nanodiamonds and fluorescent nanodiamonds. The nanodiamonds produced by detonation are the most interesting, as they can easily be modified and linked with different biomolecules through hydrophobic interactions and covalent bonds.

You may be wondering why we should use diamonds despite their extortionate price. My response to this reasonable objection is that the production of nanodiamonds is economically sound, as they are synthesized by chemical vapor deposition (CVD), detonation, or high-temperature, high-pressure methods. They are already widely used in cellular, preclinical, and clinical studies, especially in the field of cancer.

Early Detection Of Cancer – First Step Towards Ending Cancer With Diamonds

Recent research has shown that diamonds can help fight cancer by lighting up cancerous areas in MRI scans, especially in the case of brain and pancreatic cancers. Nanodiamonds penetrate the cell wall without damaging it. The magnetic property of the diamonds aligns their atoms in a way that generates a signal easily detected in MRI scans, and the hyperpolarized diamonds help in tracking the movement of molecules in the body.

Schematic illustration showing the loading of different functional molecules on NDs. (Image: Theranostics)

Destruction Of Chemo-Resistant Cancer Stem Cells

If you have never heard about the "devil" of chemo-resistance: it is the ability of many cancer cells to escape chemotherapy treatment, and it is one of the primary reasons why cancer treatment fails. The main cancer cells showing this behavior are cancer stem cells, which initiate the formation of tumors in the body. This common resistance has driven the evolution of various drugs and treatment methods.

A recent development in the field pairs epirubicin, a chemotherapy drug, with nanodiamonds. Attaching epirubicin to nanodiamonds not only effectively destroyed the chemo-resistant cancer stem cells but also prevented secondary tumor formation.

The non-reactive and non-toxic nature of nanodiamonds is extremely useful for patients who cannot tolerate the toxic effects of standard chemotherapy drugs. Their versatility has opened up the possibility of using them for active targeting, potentially including antibodies or peptides against tumor-cell proteins for targeted drug release.

Role Of Diamonds In Ending Specific Cancers

Many approaches and research efforts are in progress to find ways of curing and ending cancer with diamonds. Among them, major breakthroughs have been noted against some of the deadliest cancers of all time.

Nanodiamonds For Brain Tumors

The innovative idea of a drug-delivery system using nanodiamonds has shown great results in killing cancer cells with fewer side effects. Glioblastoma, the most common and lethal type of brain tumor, can be treated with surgery, radiation, and chemotherapy, but the average life expectancy of a person with glioblastoma is just about one and a half years.

This tumor is difficult to treat because injected drugs are often unable to penetrate the protective blood vessels that surround the brain, commonly known as the "blood-brain barrier." Even when some of a drug gets through the barrier, its effect is diminished, making it ineffective against the tumor cells.

After several studies on overcoming the barrier, researchers combined the drug doxorubicin, a common and effective chemotherapy agent, with nanodiamonds; the resulting substance is known as ND-DOX.

It has been observed that tumor cells eject most anticancer drugs before the drugs can act, but ND-DOX levels in the tumor persist much longer than standard doxorubicin, without affecting the surrounding tissue. It was further found that ND-DOX increases apoptosis (programmed cancer-cell death) and decreases cell viability in the tumor cells. Using ND-DOX showed reduced toxic effects and increased tumor-killing efficiency, and survival times increased compared with patients given doxorubicin alone.

Further research is underway on combining various other drugs with nanodiamonds, improving the treatment and reducing the side effects, to help end cancer more effectively.

Nanodiamonds For Breast Cancer

The use of nanodiamonds for breast cancer in humans is still in its infancy. Scientists first experimented with ND-DOX on mice with liver cancer and discovered that the retained level of doxorubicin was ten times higher, with an increased survival rate, in mice treated with ND-DOX compared to mice given doxorubicin alone. The nanodiamonds were then tested for breast cancer, with the same results: fewer toxic effects and higher efficiency in destroying the tumor cells.

A schematic of targeted drug delivery towards breast cancer: nanodiamonds are encapsulated within liposomes that are functionalized with targeting antibodies. (Image: Phys.org, Dr. Laura Moore (Prof. Dean Ho Group))

Using nanodiamonds against human breast cancer requires synthetic polymers with reproducible, tunable properties and composition, which are still in development. Nanodiamonds have been successful in liver cancer and mammary carcinoma models, showing positive results in all of them. Research is still being conducted on using nanodiamonds against various other lethal cancers and, ultimately, on ending cancer with diamonds.

"It is very interesting to see the properties of diamonds used to save the lives of many people by ending cancer. Yet there is a lot more to be discovered, as many questions remain open in this field."

Find out more about the capabilities of diamonds and their amazing properties, here.

New Ultrafast Camera: Filming at 100 billion Frames per Second


Photography is a hobby for a few and a necessity for many. Cameras, and photography as a whole, have evolved with great intensity over the past three to four decades. With their arrival in smartphones, cameras have taken huge leaps, reaching new ways to capture things differently and in more detail. One of the major factors deciding a camera's performance is frames per second (FPS): the higher the FPS, the more information is captured, and the better and faster it can be processed.

Recently, in this quest to build ever-faster cameras and break records, the California Institute of Technology (Caltech) showed some astonishing results. Caltech's Lihong Wang has developed a technology, termed an ultrafast camera, which can capture 3-D movies at 100 billion frames per second.

An earlier version of this insane technology can reach blistering speeds of 70 trillion FPS, fast and smooth enough to see light travel, but, like your cell-phone camera, it produces only flat images. The new technology, evidently, can capture ultrafast three-dimensional videos and may help to solve scientific mysteries that have stood unanswered for years.

So, let us capture more (know more) about the technology and its amazing captures.

Note: While reading, don’t feel sleepy. We are capturing your every single movement at 100 Billion FPS.

Knowing the Technology in an Ultrafast Way:

Let me quickly explain what frames per second (FPS) means and why it matters here. It is simply the rate at which frames (consecutive images) appear on a display. At 30 FPS, 30 frames appear in a single second of video; the same logic applies at 60 FPS, 120 FPS (and indeed at 100 billion FPS too). The higher the FPS, the smoother the video. Interestingly, human vision resolves only about 10-12 separate images per second; beyond that, everything blends into motion.
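
The arithmetic behind all these numbers is a single division: each frame covers 1/FPS seconds. A tiny sketch (the light-travel remark assumes c ≈ 3×10^8 m/s):

```python
def frame_interval_seconds(fps):
    # each frame spans 1/fps seconds of real time
    return 1.0 / fps

for fps in (30, 60, 120, 100_000_000_000):
    print(f"{fps:>15,} FPS -> {frame_interval_seconds(fps):.1e} s per frame")

# at 100 billion FPS a frame spans 10 picoseconds, during which light
# (c ≈ 3e8 m/s) travels only about 3 millimetres
print(3e8 * frame_interval_seconds(100_000_000_000))
```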

Here, Wang has developed a new technology, which he calls "single-shot stereo-polarimetric compressed ultrafast photography," or SP-CUP.

In CUP technology, all the frames of the entire video are captured in one single action, without repeating the event, which makes the technology extremely quick. Wang adds that he has brought in one extra ingredient that makes the whole technology unique.

Professor Lihong Wang with the 100-billion-FPS ultrafast camera (image: sen)

When we look around and experience our surroundings, we perceive that some objects are closer and some are farther away. This is known as depth perception. It is possible because of our two eyes, each of which observes the surroundings from a slightly different angle; the brain then combines the information from the two images into a single 3-D picture.

The camera developed here works in essentially the same way. SP-CUP has only one lens, but it functions as two halves that provide two different views with an offset, so the two channels mimic our eyes pretty well. The computer that runs SP-CUP then does what our brain does with the signals from our eyes: it processes the data from the two channels into one three-dimensional movie.
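
The two offset views correspond to the textbook stereo relation depth = focal length × baseline / disparity. This is a hypothetical sketch of that relation, not the actual SP-CUP reconstruction pipeline:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # classic pinhole-stereo relation: nearer objects shift more
    # (larger disparity) between the two views
    return focal_px * baseline_m / disparity_px

# hypothetical numbers: 1000 px focal length, 6 cm between the two views,
# a feature shifted by 12 px between channels
print(depth_from_disparity(1000, 0.06, 12))  # -> 5.0 (metres)
```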

This is a “sonic boom” captured by this 100 B FPS ultrafast camera
(GIF: GadgetZZ)

Thus, Wang’s camera has gone one step beyond just recording video at insanely fast speeds (yes, 100 billion FPS): it can now record video in three dimensions at the same incredible rate.

The super-ability of the ultrafast camera: capturing the unwatchable

Beyond its ability to see and capture things in three dimensions, SP-CUP has one incredible ability that no human has: it can see the polarization of light waves.

Consider a string attached to two rigid supports. If the string is plucked upward or downward, it vibrates vertically; if plucked sideways, it vibrates horizontally. Ordinary visible light vibrates in all directions, whereas polarized light has been modified so that it vibrates in only one direction. This can occur naturally, or it can be done artificially using polarizing filters.

Day-to-day examples of this are LCD screens (pretty outdated now), polarized sunglasses and even camera lenses. At the scientific level, polarization is used to detect hidden stress in materials and the 3-D configuration of molecules.
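The string picture above translates into a textbook rule, Malus’s law: a polarizing filter at angle theta to the light’s vibration axis transmits a cos²(theta) fraction of the intensity. A small sketch:

```python
import math

# Malus's law: intensity transmitted through a polarizing filter set at
# angle theta (degrees) to the light's polarization axis.
def transmitted_intensity(i0, theta_deg):
    return i0 * math.cos(math.radians(theta_deg)) ** 2

aligned = transmitted_intensity(1.0, 0)    # filter aligned: all light passes
crossed = transmitted_intensity(1.0, 90)   # crossed filters: essentially none
```

This is why rotating polarized sunglasses against an LCD screen makes the screen go dark.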

A three-dimensional video showing a pulse of laser light passing through a laser-scattering medium and bouncing off reflective surfaces. (GIF: Caltech)

Combining high-speed 3-D imagery with all the polarization information makes this a very powerful tool, applicable to a wide range of scientific problems.

For starters, consider a phenomenon we have all witnessed in childhood. Everyone remembers soap bubbles, right? Bursting them with our tiny little fingers was soothing, and we enjoyed it a lot. But scientifically, when bubbles collapse rapidly after forming, they can emit a burst of light: the collapsing interior reaches such high temperatures that it radiates light.

A similar phenomenon appears when sound waves passed through water or another liquid create tiny bubbles. This physical phenomenon is called sonoluminescence (you probably never thought this deeply while bursting bubbles).

All these tantalizing phenomena occur rapidly and mysteriously; sonoluminescence is considered one of the great unsolved mysteries in physics. An ultrafast camera like this is exactly what could help us observe and understand them.

Synergizing the imaging technology

Talking about great mysteries of physics, simultaneous and efficient ultrafast recording of multiple photon tags was unachievable; until the ultrafast camera. This majorly can contribute to high dimensional optical imaging and characterization in various fields (quantum computing even). This optical imaging is indispensable to maximize and extract information carried by different photon tags. Ultrafast camera can be ubiquitously used in various fields like biomedicines, agriculture and electronics.

Until now, high-dimensional imaging involved capturing either a 1-D column or a 2-D array or slice, which inherently suffers from low measurement efficiency and a lack of information. Over the past decades, techniques were developed at a rapid rate to overcome this, and existing approaches could measure various combinations of photon tags, enabling much more detailed study and 3-D models beyond plain 2-D spatial information.

But this ultrafast camera has caught the interest of many scientific communities. With its single-shot temporal imaging and high-dimensional optical imaging, the technology is par excellence. Its novel ability to capture a photon’s time of arrival without repeating the measurement is also notable. This opens new paths toward understanding underlying, unidentified mechanisms and phenomena in physics, chemistry and biology. In particular, non-repeatable or difficult-to-reproduce events can now be studied with precision and efficiency.

Capturing things at such high speed makes us realize how fast Mother Nature functions,

and how slow we (as humans) are at observing it.

OSD

The Incredible Skepticism of the Legendary James Randi


James Randi, one of the top-tier magicians of the entertainment industry, a MacArthur-award-winning performer who turned his daunting shrewdness to scrutinizing claims of spoon bending, mind reading, fortune-telling, ghost whispering, water dowsing, faith healing, U.F.O. spotting and every variety of bunco, flimflam, flummery, humbuggery, mountebankery and out-and-out quackery, died on Tuesday at his home in Plantation, Fla., at the age of 92, of age-related causes, as announced by the James Randi Educational Foundation.

The Canadian-born Randi earned public respect as one of the world’s premier skeptics on matters from ghosts to UFOs. He was a promoter of healthy skepticism and rational thought, and his performances influenced hordes of aficionados, including the TV and stage illusionists Penn & Teller.

James Randi hated tricking people.

As The Amazing Randi, he pulled off amazing escape acts and sleight-of-hand faster than you could see, but it was all in service of proving that he wasn’t magical in any sense of the word. He hated tricking people so much that he made a career out of debunking so-called psychics, faith healers, fortune-tellers and all sorts of charlatans.

At 17, he dropped out of school altogether and became an escape artist. At 60, he retired from stage magic entirely; by then he had built a parallel career investigating claims of the paranormal, much as Houdini had done. In 2016, Randi recalled that when they started reading to him from the Bible in Sunday school, he interrupted and said, “Excuse me, how do you know that’s true? It sounds strange.”

Once, at the age of 15, he watched a clergyman perform a trick in which he professed to read the minds of people in the audience. People were weeping real tears and becoming emotionally disturbed, actually believing that this man had supernatural powers. So Randi walked up on stage, interrupted the performance, and showed the audience the workings of the trick.

The clergyman’s wife called the police, and Randi spent four hours in a cell. In those four hours he made up his mind that one day he would have the prestige, the knowledge and the platform on which to stand and denounce such people if they were fakes. He never claimed that magic, faith healing or fortune-telling could not exist; his point was that these peddlers weren’t doing any of those things. They were simply fooling people while claiming such superpowers.

One of his most famous targets was Uri Geller,

whose claims of psychokinetic metal bending, twisting spoons and forks, brought Randi to world attention. Geller sued Randi numerous times. Randi maintained that he never accused Geller of fraud, but merely showed that bending cutlery could be performed by a conjurer, usually by pre-treating the utensil and using sleight of hand. He published The Truth About Uri Geller in 1982, and wrote numerous other books on paranormal, pseudoscientific and supernatural claims, including Flim-Flam! (1980), The Faith Healers (1987) and An Encyclopedia of Claims, Frauds and Hoaxes of the Occult and Supernatural (1995).

The very famous Uri Geller, one of the most renowned rivals of Randi.
[Image: Uri Geller]

The book James Randi wrote, debunking Uri Geller’s tricks.
[Image: Amazon]

Randi always reminded his audiences that his acts were based on tricks and not magic, and he soon turned his attention to debunking others who claimed paranormal powers. In 1974, Randi set a Guinness World Record by lying naked on a slab of ice for 43 minutes and 8 seconds. He held another Guinness World Record for beating Harry Houdini’s time sealed in an underwater coffin: one hour and 44 minutes. Throughout the ’60s and ’70s Randi appeared on stage as an illusionist and escapologist.

He founded the James Randi Educational Foundation in 1996, through which he offered a prize of $1m to anyone who could demonstrate evidence of supernatural powers. The challenge was officially closed in 2015, the same year a documentary film about Randi, An Honest Liar, was released.

As public interest in the paranormal increased in the 1970s, Randi, together with the sci-fi writer Isaac Asimov and the astronomer Carl Sagan, co-founded the Committee for Skeptical Inquiry to investigate claims of the paranormal and promote scientific inquiry.

Randi performing the same feat performed by the very famous Harry Houdini, the underwater coffin feat.
[Image: The New York Times Magazine]

In the scientific community, he remained a commended figure to the end. Among his honours, he had a minor planet named after him: Asteroid 3163 Randi, discovered in 1981. Randi did have a plan for the hereafter. He told New Times in 2009, “I want to be cremated, and I want my ashes blown in Uri Geller’s eyes.”

In a 1987 Fresh Air interview, James Randi said, “I can’t do real magic. I think there’s more magic in the opening of a morning glory than anything I or any charlatan in history has ever done, is doing, or will ever do.” His campaign was against such pseudoscience and the self-proclaimed faith healers who keep deluding people. He was a prince of reason with a truly skeptical mind.


Atomic Dhruvi

Multi-Billion Dollar Industry Is Here To Disrupt Your Privacy

Phase-A

Data, data, and data! This word is not merely a word but the heart of a trillion-dollar industry.🤑 Wherever money is involved, laws are obligatory! That’s why governments came up with privacy policies, which are like fundamental flexbox grids: every company has to stay within them while making designs and decisions that can affect your privacy.

But why am I talking about this? Why does the title read more like a warning? The industry these pointed questions target is…..(drumroll! 🥁) quantum computing. Let’s dive right in and exploit a typically sinister Hollywood plot, shall we?

A James Bond meme. [Source- Pinterest]

Devices of the Future or Doom of the User

While this may look like a device from some James Bond movie, it’s a computer, or more specifically, a quantum computer. These devices are among the most amazing and remarkable innovations, with the potential to change human history. But we would be naïve to ignore the flip side of the coin!

Sycamore, the 53-qubit quantum computer built by Google. [Source- phys.org]

With just the right mix of quantum physics, quantum maths and some electricity, it could crack the public-key encryption algorithms used today, blowing our privacy into chunks. Don’t rush to safeguard your data right away; there’s still some time to go! To understand the warning above, we need to understand a bit about this device. (Doomsday machine!)

Understanding Quantum Computer Bit by Bit

Everything important needs a carrier. Just as the body needs hemoglobin to carry oxygen around, a classical computer needs bits to carry data. The amount of data a machine can process at a time determines its speed, and each bit carries a piece of information.

So what makes quantum so ‘special’? Mainly its principles: quantum superposition and quantum entanglement! I will take you on a quick tour. (Godspeed!) Superposition is the possibility of being in different states at the same time. Simplifying: a bit, which represents 0 or 1 in the classical sense, does not exist like that in the quantum sense. In the quantum realm it exists as 0 and 1 simultaneously and is called a “qubit”; it can even sit in a perfectly balanced state between 0 and 1.

Thanos depicting balance of knife akin to state of qubit. [Source- steamcommunity]

Now imagine both computers solving a maze. The classical one will try every possible escape route one at a time; the quantum one would, in a sense, explore all the escape routes at the same time. That gives quantum an advantage over classical. In a normal computer, 2 bits can be in one of four states (00, 01, 10, 11), with each bit occupying a fixed identity. Two qubits can also represent those states, with the exception that they can be in all the possible states at the same time. So as we add more qubits to the system, the power of computation grows exponentially.

Superposition lets a qubit occupy all the possible quantum states at once. [Source- towardsdatascience]
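The exponential growth described above can be sketched in plain Python, with no quantum library: an n-qubit register is just a vector of 2ⁿ complex amplitudes, one per classical bit-string.

```python
import itertools

# An n-qubit register in equal superposition: one amplitude per classical
# bit-string, each weighted 1/sqrt(2**n) so the probabilities sum to 1.
def equal_superposition(n):
    dim = 2 ** n
    amp = 1 / dim ** 0.5
    return {"".join(bits): amp for bits in itertools.product("01", repeat=n)}

state = equal_superposition(2)    # four amplitudes: "00", "01", "10", "11"
# measurement probabilities are |amplitude|^2 and must sum to 1
total = sum(abs(a) ** 2 for a in state.values())
```

Note how 2 qubits already need 4 amplitudes and 10 qubits need 1024; that doubling per qubit is the exponential boost.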

For the exponential boost, you need some sort of nitro. What is that nitro? It’s famously dubbed “spooky action at a distance.” Entanglement is here to untangle the problem. (Bad joke!! 😝) Just as every team needs some means of communication, entanglement provides one for the qubits. Entanglement is also necessary for other experiments, like this black-hole one. Entangled qubits occupy states that cannot be described as a product of two single-particle states, for example:

(1/√2)(|↑⟩1|↑⟩2 + |↓⟩1|↓⟩2); this is known as a Bell state.

Preventing Privacy from Piracy

There are 17 billion devices connected to the Internet today, and all of them need some kind of protection, a shield from attacks! To prevent a third party from eavesdropping on your conversations, various encryption algorithms are deployed for our safety.

To explain simply: suppose you are talking to your friend and a stranger is standing beside him. If you want to convey something important, you lower your voice, perhaps even whisper close to your friend, so the third person can’t hear anything, or if he does, it sounds like gibberish. This is the simplest, if not the most effective, example of encryption.

The simplest way of encryption: watch out for the eavesdropper. [Source- IndianMT]

I won’t cover all the encryption methods here, because breaking them into easily understandable parts is tough; they are, you know, “encrypted”! We will explore the RSA algorithm, which quantum computing specifically aims to break. RSA found widespread use in public-key encryption and decryption. The algorithm relies on a public key for encrypting messages and a private key for decrypting them.

Decrypting the Working of RSA Algorithm

I’m gonna take you through the working, godspeed! But I know you will catch the gist of it. The algorithm works on principles of pure maths and logic. Assume you are the recipient and someone else is a sender. You give each sender a virtual lock (a public key provided by you), and only the key that unlocks it (the private key) stays with you. The sender uses the virtual lock when sending anything confidential to you, and you access it with your key. Easy, right?

The virtual key (private key) generated by the RSA algorithm. [Source- cloudfront]

The virtual lock (public key) generated by the RSA algorithm. [Source- IoTpractitioner]

What are the virtual lock and key? The public key (the virtual lock) is based on a number computed from two very large distinct prime numbers, along with an auxiliary value. Be ready for some maths that will blow you out of the water. Take any two prime numbers, say 5 and 17 (in practice these are very large!!). We assign p=5 and q=17, so that N=p*q=85.

Starting from 1 up to N-1=84, we need the numbers that do not share a factor with N, i.e. that are co-prime with N. There are 64 such numbers in total. (Refer to this video here.) Practically it’s not feasible to count them all, so we have the formula Φ=(p-1)*(q-1), which gives the count of co-primes. Next, we compute “Carmichael’s totient function”: Λ(N)=lcm(p-1, q-1)=lcm(4, 16)=16, which is kept secret.

Now we choose ‘e’ such that 1&lt;e&lt;Λ(N) and ‘e’ is co-prime with Λ(N). In this case 1&lt;e&lt;16; let’s take e=7, where gcd(e, Λ(N))=1. Now we have our lock, the public key: (7, 85). Just one more step, bear with me: we need to compute another factor ‘d’ such that (d*e) mod Λ(N)=1. Skipping the procedure, we can take d=23 (check: 23*7=161=10*16+1). So now we have our private key: (23, 85).

The RSA algorithm: encryption and decryption explained.
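The whole worked example above fits in a few lines of Python. This is a toy sketch using the article’s tiny primes (p=5, q=17); real RSA keys use primes hundreds of digits long, plus padding schemes that are omitted here.

```python
from math import gcd

p, q = 5, 17
N = p * q                                      # 85, the public modulus
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # Carmichael's Λ(N) = lcm(4, 16) = 16
e = 7                                          # public exponent, gcd(e, Λ(N)) = 1
d = pow(e, -1, lam)                            # smallest d with d*e ≡ 1 (mod Λ(N))
# the article's d=23 works just as well, since 23 ≡ 7 (mod 16)

message = 42                                   # any message m < N
cipher = pow(message, e, N)                    # lock it with the public key (e, N)
plain = pow(cipher, d, N)                      # unlock it with the private key (d, N)
```

Running this round-trips the message: decrypting the ciphertext with the private key recovers exactly what was encrypted with the public one.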

Seeing the Big Factor at Play for Privacy

Here comes the fun part!! (Hacker mode on!) The big question is: how can we break this encryption? The process seems easy at first but gets trickier as it unfolds. The reason I chose RSA is that the whole story revolves around two main characters, ‘p’ and ‘q’. These primes are kept secret because they are the materials from which the lock and key are made.

We know N is the product of these two primes, so it has only four factors: 1, itself, p and q. If somehow we can find those factors, boom, all the privacy goes poof! But how do you find such large factors? Not by brute force, but with a lightsaber that can cut through any lock.

With the integer factorization of N, anyone can break the encryption into bits. [Source- hotelpartner]
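To see why brute force is hopeless, here is trial division, the naive way to recover p and q from N. It cracks the toy modulus instantly, but for a real 2048-bit N it would outlast the universe; that gap is exactly what RSA’s safety rests on.

```python
# Trial division: try every candidate factor f up to sqrt(n).
def factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f     # found p; q is the cofactor
        f += 1
    return n, 1                  # n itself is prime

p, q = factor(85)                # the toy modulus from the worked example
```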

We will discuss the lightsaber in another article! Let me know in the comments your views on the Government’s privacy policy. Till then, stay safe, physically as well as digitally.

It is possible to invent a single machine which can be used to compute any computable sequence. And perhaps one day it will be able to simulate a human mind.

Alan Turing

The second part of this article has been published. Click Here to crack the RSA algorithm.

Mind Or Soul? The Three Ultimate Questions To Answer


If you are interested in reading the original Hindi version written by the author, please download it from the link below:

[wpdm_package id=’4804′]

Just imagine: how would an electron know that it carries a unit of electric charge? Think of yourself in a chemistry lab, doing a special experiment assigned by your lab instructor, which you didn’t finish. This doesn’t harm chemistry as a whole, nor does it prove anything right or wrong. I say all this because chemical knowledge does not depend on your activity, ability or belief. In the same way, biological, cosmic and supernatural powers do not cease to exist through our obeying or mistrusting them.

Your observance of or disbelief in them does not define their existence or its end. This assertion may seem irrational to you, but my aim is to bring you the idea in simpler terms, because what I am going to say next requires that you start imagining.

Let’s talk about Ramana Maharshi. He taught that when infinite thoughts are born together, the mind of an organism becomes powerless; but if an organism dissolves itself into one of those thoughts, the mind becomes stronger and bolder. This is called ‘concentration’, and a concentrated mind is very strong.

Sri Ramana Maharshi
(with his beautiful thoughts and ideologies)
(image: Pinterest)

The religious & unrighteous mind

The mind does not see righteousness or iniquity, sin or virtue, good or bad, high or low. It sees only the thought in which you place it, and that thought becomes intense. Since the mind is the master of all the senses, they too behave in the way the mind is focused.

the religious and unrighteous mind
(image: MPRNews)

When the mind remains outside its native seat, i.e. indulges in any external thought, its nature becomes fickle. But when we are fully absorbed in that thought, we feel accomplished, or what we call, in simpler words, success. Then the mind returns to its original place, the core of the heart, for some moments. As a result, the organism experiences a special feeling, which we call ‘happiness’. That is why controlling the mind through meditation, samadhi or yoga binds the mind to the heart; this is a way of attaining happiness.

The mind is not an object or a person. It is actually an expression of the ‘soul’. The way the soul wanders in this world is displayed by the mind. The mind itself is guided by wisdom, and the intellect in turn is guided by the soul.

Resemblance is the key to life

In the same way, when we utter an abuse or a sermon, no harm is done to the organism, which means there is no harm to the body. Both are merely strings of sounds (words) spoken using one’s tongue and teeth. Whatever is meaningful to a particular person, society or culture, they decorate those characters in their own different ways.

Assuming this, we can say that the soul here has become like the body of the speaker, which is unaffected by any of our actions. Evidently, the relation between mind and soul is of the same kind as that between a dialect and its reader. Also, think of the language the speaker has used as Buddhi (intelligence).

Let us test a situation using this. Let me say:

  1. Bifyom
  2. Mig

Neither word, as a sound (mind), has any meaning. For humans, it is up to us to give a high or low status to either of the above. Our organs of speech, such as the teeth or tongue (soul), did not discriminate between these words while pronouncing them. But when language (intelligence) decides that the first word shall be called abusive and the latter good for society, then the knowledge of Dharma (the eternal and inherent nature of reality) and unrighteousness, high and low, good and bad, is born.

The Truth inside mind or outside it

Before that, all circumstances seem even and comfortable: these are simply two different words of the same class. In the same way, the soul neither distinguishes between living beings nor is affected by what kind of word is spoken. It is our intellect that decides which word is good and which is bad, and likewise society decides which deeds are good to do and which are not.

Noticeably, society resorts to religion (Dharma), and Dharma is that which harms nobody. It is regarded as a cosmic law underlying right behavior and social order. It may happen that one person’s idea of the cosmic law or right behavior goes against another’s belief; in short, the well-celebrated regulations of one social group may be perceived as taboo by another.

A person standing still feels that he is stationary and all the objects in front of him are moving, while a person among those moving objects feels that he is stationary and all the others are moving. This is what we call the relativity of rest and motion. In a similar way, Dharma and Adharma are relative.

The question that now arises is: what is the thing that will remain invariant for all social groups, amid any ideas and beliefs? The answer is Satya (Truth).

Truth.

It is what has happened, which will not change and is reality. It is considered essential for a balanced and harmonious existence in the universe: when truthfulness is present, the universe operates the way it should, and everything in the world depends on Satya to function correctly. That is truth, which neither the unrighteous person can deny nor the religious person discard. It is not especially favorable to the religious person, nor can he claim a right over it.

But by knowing the truth, society and the intellectual class divide deeds into religious and unrighteous ones. For instance, worshipping the almighty is religion; insulting one’s parents is unrighteous; and so on. Hence, accepting the truth is ‘religion’ and rejecting it is ‘unrighteousness’.

The three ultimate questions

If it is true that humans dream, and this world too is indeed a sort of long-term dream, then I want to know:

If we say that the world is an illusion, why does God incarnate in it?

If the entire purpose of a human being was to reconcile the mind with the soul in the heart, then why did the mind create itself in the first place?

We know that real happiness is attained only when the mind is in its original place (hradyagruha). Only then is the true nature of this world seen, and as a result the illusion disappears. Why was this kind of play created, or scripted?

It could have been that the fickle-natured mind was given a slower pace and ordained to remain introverted, staying in the heart’s cavity forever. Then we would always have stayed away from Maya (cosmic illusion). But this did not happen; instead, we got three stages. First, the body we live in, in which our soul resides. Second, the one in which we live within our imagination or a dream. And lastly, the body which is real, to which we should strive to go and attain salvation.

Why were such conditions, situations or perceptions, whatever they be, created for living beings?

The Self cannot be found in books. You have to find it for yourself, in yourself.

– Sri Ramana Maharshi

To read more articles on mind power and psychology; Click Here.