
The science behind RHC's liver thread

Discussion in 'Liverpool' started by Prince Knut, Apr 30, 2016.

  1. Red Hadron Collider

    Red Hadron Collider The Hammerhead

    Joined:
    Mar 2, 2011
    Messages:
    57,485
    Likes Received:
    9,843
    No mass
     
    #781
  2. Milk not bear jizz

    Milk not bear jizz Grasser-In-Chief

    Joined:
    Nov 12, 2013
    Messages:
    28,427
    Likes Received:
    10,110
    Sí señor, no más.
     
    #782
  3. Milk not bear jizz

    Milk not bear jizz Grasser-In-Chief

    Joined:
    Nov 12, 2013
    Messages:
    28,427
    Likes Received:
    10,110

    Quantum entanglement, string theory, something, something, I guess. <laugh> I won't pretend to fully understand it at all.

    It's not the only "instant information" in the universe. Gravity is "instant" too: as the sun moves, we're drawn to where it is now, not where it was several seconds ago.
     
    #783
    Prince Knut likes this.
  4. Prince Knut

    Prince Knut GC Thread Terminator

    Joined:
    May 23, 2011
    Messages:
    25,482
    Likes Received:
    12,840
    Gravitational waves are not instant though.
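
    For the record, general relativity says changes in gravity propagate at the speed of light, the same ~8 minutes sunlight takes to reach us. A quick sanity check in Python (my own numbers, not from the thread):

    # Light (and gravitational-wave) travel time from the Sun to Earth.
    AU = 1.495978707e11   # metres, mean Sun-Earth distance
    c = 2.99792458e8      # m/s, speed of light = speed of gravity in GR

    t = AU / c
    print(f"{t:.0f} s (~{t / 60:.1f} minutes)")  # ~499 s, about 8.3 minutes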
     
    #784
  5. Prince Knut

    Prince Knut GC Thread Terminator

    Joined:
    May 23, 2011
    Messages:
    25,482
    Likes Received:
    12,840
    Just watching Apollo 13 on Sky, as we're talking science. Must be the 8,031st time! :biggrin: I met Jim Lovell - I tell more people about that than about the time I met McCartney. :emoticon-0111-blush:
     
    #785
    moreinjuredthanowen and THE FOOL like this.
  6. O'l Gravy Leg.

    O'l Gravy Leg. Well-Known Member

    Joined:
    Oct 21, 2020
    Messages:
    1,959
    Likes Received:
    1,057
    An aggregate of studies shows it does have efficacy in the early disease stage.

    The Oxford study used massive doses, 4x or more, killing 25% of patients (MANY TIMES THE DEATH RATE OF BAD INFECTIONS IN THE OLD AND SICK). Someone should go to jail for that; it was murder.
    The Lancet published a completely fake study.
    The WHO cancelled studies because of the Lancet publication.


    There is no patent on the drug, fancy that.

    The same FDA approves drugs constantly that have INSANE risks - has anyone ever seen US medication advertisements? <laugh>

    Often when a drug's patent ends over there, the FDA starts telling people it's terribly dangerous.
     
    #786
  7. organic red

    organic red Well-Known Member

    Joined:
    Dec 21, 2011
    Messages:
    27,857
    Likes Received:
    10,823
    The legal disclaimers go on and on... and on... <laugh>

    It's embarrassing, big pharma are criminals in suits :headbang:
     
    #787
  8. Milk not bear jizz

    Milk not bear jizz Grasser-In-Chief

    Joined:
    Nov 12, 2013
    Messages:
    28,427
    Likes Received:
    10,110

    All the damn time! <laugh>

    They started out kinda cutesy because originally they couldn't say what the drug did. Rogaine was the first one that was heavily advertised, but the ads never said what it was that Rogaine did (and this was before it was easy to google things).

    You'd see pictures of men swirling kids around and going to sports events saying "ask your doctor if Rogaine is right for you", without ever saying what the medicine did. Back then I wondered "what the fuxk is Rogaine, why do they keep advertising this? Why would I want a medicine that made me swing kids around if it gives me side effects?"



    Now the ads are obnoxious. They always make a cute cartoony CGI representation of the disease, with happy people running in sunshine and the cute CGI virus monster getting crushed by a box... Every single bloody medicine being advertised follows the same advert format... No creativity.

    Then there's the long list of side effects: may give you headaches, bellyaches, or cause you to turn into a flesh-eating zombie... etc.
     
    #788
  9. moreinjuredthanowen

    Joined:
    Jun 9, 2011
    Messages:
    115,704
    Likes Received:
    27,602
    But on the other hand, the FDA is one of the most respected agencies globally, and feared, as everything it does is made public.

    Your ads are a result of 100 years of legislation that forces these snake-oil products to reveal all the issues.

    It's America and health care that's broken imo, not the FDA. It's the thin blue line between the ravaging capitalist machine and the American people.
     
    #789
    Prince Knut likes this.
  10. Prince Knut

    Prince Knut GC Thread Terminator

    Joined:
    May 23, 2011
    Messages:
    25,482
    Likes Received:
    12,840

    :emoticon-0148-yes:
     
    #790

  11. Red Hadron Collider

    Red Hadron Collider The Hammerhead

    Joined:
    Mar 2, 2011
    Messages:
    57,485
    Likes Received:
    9,843
    The wobbling orbit of a pulsar proves Einstein right, yet again
    New observations of ‘frame dragging’ help reveal details of the final days of a pair of stars

    A white dwarf (illustrated, center) twists spacetime as it spins, forcing the orbit (pink) of a neighboring pulsar (illustrated with radio jets) to wobble.

    MARK MYERS/ARC CENTRE OF EXCELLENCE FOR GRAVITATIONAL WAVE DISCOVERY (OZGRAV)

    By Christopher Crockett

    JANUARY 30, 2020 AT 2:00 PM

    Chalk up yet another win for Einstein.

    A twist in the fabric of spacetime — predicted by the physicist’s theory of general relativity (SN: 10/7/15) — is causing the orbit of one stellar corpse to teeter around another stellar corpse, researchers report. And the relativistic corkscrew is helping astronomers reconstruct the final days of these two long-dead stars.

    According to general relativity, any spinning mass drags spacetime around with it, like a hand mixer in molasses. One way to see this “frame dragging” is to keep a careful eye on anything circling the spinning object on a tilted orbit — the spacetime maelstrom will make the orbit wobble, or precess.

    For the last 20 years, researchers have been using radio telescopes to track the motion of a pulsar, the dense remains of a massive star that went supernova, as it orbits a spinning white dwarf, the core of a lighter star that died less violently. The pulsar, dubbed PSR J1141–6545, emits a steady beat of radio waves as it spins, and by recording the arrival times of those pulses, researchers can tell when the pulsar is moving toward and away from Earth.


    Over those two decades, the orbit of the pulsar has been slowly precessing, astronomers report. The precession isn’t much — the orbit’s tilt drifts by just 0.0004 degrees per year. But it matches what researchers expect if the neighboring white dwarf whips up spacetime as it spins. Vivek Venkatraman Krishnan, an astrophysicist at the Max Planck Institute for Radio Astronomy in Bonn, Germany, and colleagues report the results in the Jan. 31 Science.
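
    As a quick back-of-the-envelope (mine, not the paper's) of how small that drift is over the two-decade campaign:

    # Cumulative orbital-tilt drift implied by the reported precession rate.
    rate_deg_per_yr = 0.0004   # degrees per year, from the Science paper
    years = 20                 # length of the radio-timing campaign

    drift = rate_deg_per_yr * years
    print(f"{drift:.3f} degrees (~{drift * 3600:.0f} arcseconds)")  # 0.008 deg, ~29"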

    This finding isn’t the first time that researchers have observed frame dragging. Satellites in Earth’s orbit have captured the relatively puny effect around our planet (SN: 11/24/15). And astronomers also have observed fluctuations in the frequency of X-ray light coming from a black hole, where frame dragging should be quite intense, suggesting that gas may be precessing around it (SN: 12/17/15).

    The new observation “is much more direct than mine,” says Adam Ingram, an astrophysicist at the University of Oxford who studied the black hole. “I can only infer that something is precessing in black hole systems, whereas the precision radio observations presented here leave little room for ambiguity.”

    The pulsar precession helps researchers piece together the final moments in the lives of both stars. Relativistic wobbling occurs only if the orbit of the pulsar and the spin of the white dwarf are misaligned, something which is usually smoothed over by an exchange of mass between the dying stars. “This immediately tells us that the orbit was tilted due to the supernova explosion that produced the pulsar,” Venkatraman Krishnan says.

    Normally, the supernova would go off and then the progenitor of the white dwarf would dump gas on the pulsar after the explosion, aligning spin to orbit. But in this case, the opposite happened: The pulsar’s progenitor dumped gas on the white dwarf and then the supernova occurred.
     
    #791
    Prince Knut likes this.
  12. Red Hadron Collider

    Red Hadron Collider The Hammerhead

    Joined:
    Mar 2, 2011
    Messages:
    57,485
    Likes Received:
    9,843
    Scientists cooled a nanoparticle to the quantum limit
    The particle’s motion reached the lowest level allowed by the Heisenberg uncertainty principle

    Scientists cooled a nanoparticle in a specially designed cavity (shown), reaching the lowest temperature permitted by quantum mechanics. Laser light scattering off of the nanoparticle appears as a dot in the center of this artificially colored image.

    KAHAN DARE, LORENZO MAGRINI AND YURIY COROLI/UNIVERSITY OF VIENNA

    By Emily Conover

    JANUARY 30, 2020 AT 2:00 PM

    A tiny nanoparticle has been chilled to the max.

    Physicists cooled a nanoparticle to the lowest temperature allowed by quantum mechanics. The particle’s motion reached what’s known as the ground state, or lowest possible energy level.

    In a typical material, the amount that its atoms jostle around indicates its temperature. But in the case of the nanoparticle, scientists can define an effective temperature based on the motion of the entire nanoparticle, which is made up of about 100 million atoms. That temperature reached twelve-millionths of a kelvin, scientists report January 30 in Science.

    Levitating it with a laser inside of a specially designed cavity, Markus Aspelmeyer of the University of Vienna and colleagues reduced the nanoparticle’s motion to the ground state, a minimum level set by the Heisenberg uncertainty principle, which states that there’s a limit to how well you can simultaneously know the position and momentum of an object.
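
    As a rough illustration of what reaching the ground state means: in a hedged sketch that assumes a trap frequency of about 300 kHz (typical for such experiments, but not quoted in this article), the mean number of motional quanta at twelve-millionths of a kelvin comes out below one:

    import numpy as np

    hbar = 1.054571817e-34     # J*s, reduced Planck constant
    kB = 1.380649e-23          # J/K, Boltzmann constant
    T = 12e-6                  # K, effective temperature from the article
    omega = 2 * np.pi * 300e3  # rad/s, ASSUMED trap frequency (~300 kHz)

    # Bose-Einstein mean occupation of the oscillator mode
    n = 1 / np.expm1(hbar * omega / (kB * T))
    print(f"mean phonon number ~ {n:.2f}")  # ~0.4, i.e. mostly in the ground state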

    While quantum mechanics is unmistakable in tiny atoms and electrons, its effects are harder to observe on larger scales. To better understand the theory, physicists have previously isolated its effects in other solid objects, such as vibrating membranes or beams (SN: 4/25/18). But nanoparticles have the advantage that they can be levitated and precisely controlled with lasers.

    Eventually, Aspelmeyer and colleagues aim to use cooled nanoparticles to study how gravity behaves for quantum objects, a poorly understood realm of physics. “This is the really long-term dream,” he says.
     
    #792
    Prince Knut likes this.
  13. O'l Gravy Leg.

    O'l Gravy Leg. Well-Known Member

    Joined:
    Oct 21, 2020
    Messages:
    1,959
    Likes Received:
    1,057
    You never read good environment news these days so here's some to start off 2021.
    Less summer sea ice is increasing phytoplankton, producing massive blooms (the algae that provide most of the O2 we breathe and much marine food; more fish, eaten by seals and walrus, which are eaten in turn by polar bears).


    Arctic report card 2020 highlights the huge benefit of less summer sea ice: more food
    Posted on January 7, 2021
    As well as summarizing sea ice changes, NOAA’s 2020 Arctic Report Card features two reports that document the biggest advantage of much less summer sea ice than there was before 2003: increased primary productivity. Being at the top of the Arctic food chain, polar bears have been beneficiaries of this phenomenon because the Arctic marine mammals they depend on for food – seals, walrus and bowhead whales – have been thriving despite less ice in summer.



    In the sea ice chapter (Perovich et al. 2020), my favourite of all the figures published is the graph of September vs. March sea ice (above). As you can see, March ice extent has been virtually flat (no declining trend) since 2004. And as the graph below shows, September extent has been without a trend since 2007, as NSIDC ice expert Walt Meier demonstrated last year: it doesn't take much imagination to see that the value for 2020 (the second-lowest after 2012) hasn't changed the flat trend line.
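
    For anyone who wants to check a "no trend" claim themselves, the standard test is an ordinary least-squares slope with its p-value. A minimal sketch using scipy; the extent values below are placeholders only and must be replaced with the real NSIDC September series before drawing any conclusion:

    import numpy as np
    from scipy import stats

    years = np.arange(2007, 2021)
    extent = np.array([4.3, 4.7, 5.4, 4.9, 4.6, 3.6, 5.2,   # PLACEHOLDER values,
                       5.3, 4.6, 4.7, 4.8, 4.8, 4.3, 4.0])  # not real NSIDC data

    res = stats.linregress(years, extent)  # OLS fit of extent against year
    # A p-value well above 0.05 means no statistically detectable trend.
    print(f"slope = {res.slope:+.3f} Mkm^2/yr, p-value = {res.pvalue:.2f}")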



    The chapter also included this comparison of March vs. September ice charts:

    However, the real eye-opener in the report is the admission that much less summer ice has benefitted the entire Arctic food chain because of increased primary productivity.

    BENEFITS OF LESS SUMMER ICE
    One of the highlights emphasized in the report is this gem:

    During July and August 2020, regional ocean primary productivity in the Laptev Sea was ~2 times higher for July and ~6 times higher for August compared to their respective monthly averages.

    It’s nice to see an acknowledgement that the longer ice-free seasons we’ve experienced since 2007 have an up-side. In fact, less summer ice has been a net benefit to most animals in the Arctic and peripheral seas because less ice and more sunlight in most areas increases ‘primary productivity’. Primary productivity refers to phytoplankton, those single-celled plants that are the basis for life in the ocean because they turn sunlight into stored energy: sunlight is their food. Longer ice-free seasons – featuring less ice and more sunlight – provide the conditions phytoplankton need to grow exponentially, producing ‘blooms’ that can be seen by satellites. One such bloom is shown below, in the Barents Sea on 26 July 2020 (NOAA photo).



    In the table presented by one set of NOAA authors (Frey et al. 2020) below, primary productivity has been up since 2003, not including data from 2020, in virtually all regions of the Arctic:



    Similar results are discussed in a government report from Canada (Coupel et al. 2019), which summarizes reports of recent phytoplankton increases across the Arctic. I’ve revised a graphic I posted earlier this year to show how this works (below). More primary productivity due to a longer ice-free period benefits the entire Arctic food chain in which polar bears hold top spot: fatter seals because of more food mean fatter polar bears with improved survival.



    Abundant phytoplankton → more food for single-celled animals (zooplankton), fish, and bottom-dwelling invertebrates like clams → prolific reproduction of krill, fish and clams (population increases); more fish, in turn → fatter ringed and bearded seals, who feed primarily in the ice-free season, and thus fat female seals → more fat pups the following spring (Crawford et al. 2015) for polar bears when they need it most. Unless other factors come into play that reduce prey availability, like too much ice, or snow over ice in spring (Crockford 2015, 2017), polar bears will tend to be fatter, healthier, and reproduce more successfully, resulting in at least stable, if not growing population numbers – such as we’ve seen in the Barents and Chukchi Seas, and in the Gulf of Boothia, since 2005 (Aars 2018; Dyck et al. 2020; Lippold et al. 2019; Regehr et al. 2018; Rode et al. 2014, 2018).

    As well as ringed seals, bearded seals and polar bears, bowhead whales (shown below) have also benefitted from this increased primary productivity, as explained in another chapter in the Arctic Report Card by John George and colleagues.



    These authors (George et al. 2020) stated in their chapter about bowhead whales (see range map from their paper below):

    The population size of bowheads in the Pacific Arctic has increased in the past 30 years likely due to increases in ocean primary production as well as the northward transport of the zooplankton on which they feed.



    In addition, I suspect that the large recent population size and health of Pacific walrus are indicators that they are another species that has benefited from less summer sea ice (Crockford 2014a,b). Walrus feed on bottom-dwelling invertebrate creatures whose population sizes would be boosted by more abundant plankton, allowing more walrus to forage without running out of food, as they have been known to do in the past during so-called ‘boom and bust’ population cycles (Fischbach et al. 2016; Lowry 1985; MacCracken et al. 2017).

    REFERENCES
    Aars, J. 2018. Population changes in polar bears: protected, but quickly losing habitat. Fram Forum Newsletter 2018. Fram Centre, Tromso. Download pdf here (32 mb).

    Coupel, P., Michel, C. and Devred, E. 2019. Case study: The Ocean in Bloom. In State of Canada’s Arctic Seas, Niemi, A., Ferguson, S., Hedges, K., Melling, H., Michel, C., et al. 2019. Canadian Technical Report Fisheries and Aquatic Sciences 3344.

    Crawford, J.A., Quakenbush, L.T. and Citta, J.J. 2015. A comparison of ringed and bearded seal diet, condition and productivity between historical (1975–1984) and recent (2003–2012) periods in the Alaskan Bering and Chukchi seas. Progress in Oceanography 136:133-150.

    Crockford, S. J. 2014a. On the beach: walrus haulouts are nothing new. Global Warming Policy Foundation Briefing Paper 11. Pdf here.

    Crockford, S. J. 2014b.The walrus fuss: walrus haulouts are nothing new http://www.thegwpf.org/gwpftv/?tubepress_item=cwaAwsS2OOY&tubepress_page=2

    Crockford, S.J. 2015. The Arctic Fallacy: Sea Ice Stability and the Polar Bear. Global Warming Policy Foundation Briefing Paper 16. London. Available at http://www.thegwpf.org/susan-crockford-the-arctic-fallacy-2/

    Crockford, S.J. 2017. Testing the hypothesis that routine sea ice coverage of 3-5 mkm2 results in a greater than 30% decline in population size of polar bears (Ursus maritimus). PeerJ Preprints 19 January 2017. Doi: 10.7287/peerj.preprints.2737v1 Open access. https://peerj.com/preprints/2737/

    Dyck, M., Regehr, E.V. and Ware, J.V. 2020. Assessment of Abundance for the Gulf of Boothia Polar Bear Subpopulation Using Genetic Mark-Recapture. Final Report, Government of Nunavut, Department of Environment, Iglulik. 12 June 2020. Pdf here.

    Fischbach, A.S., Kochnev, A.A., Garlich-Miller, J.L., and Jay, C.V. 2016. Pacific walrus coastal haulout database, 1852–2016—Background report: U.S. Geological Survey Open-File Report 2016–1108. http://dx.doi.org/10.3133/ofr20161108. The online database is found here.

    Frey, K.E., Comiso, J.C., Cooper, L.W., Grebmeier, J.M. and Stock, L.V. 2020. Arctic Ocean primary productivity: the response of marine algae to climate warming and sea ice decline. 2020 Arctic Report Card. NOAA. DOI: 10.25923/vtdn-2198 https://arctic.noaa.gov/Report-Card...-Algae-to-Climate-Warming-and-Sea-Ice-Decline

    George, J.C., Moore, S.E. and Thewissen, J.G.M. 2020. Bowhead whales: recent insights into their biology, status, and resilience. 2020 Arctic Report Card, NOAA. DOI: 10.25923/cppm-n265 https://arctic.noaa.gov/Report-Card/Report-Card-2020/ArtMID/7975/ArticleID/905/Bowhead-Whales-Recent-Insights-into-Their-Biology-Status-and-Resilience

    Lippold, A., Bourgeon, S., Aars, J., Andersen, M., Polder, A., Lyche, J.L., Bytingsvik, J., Jenssen, B.M., Derocher, A.E., Welker, J.M. and Routti, H. 2019. Temporal trends of persistent organic pollutants in Barents Sea polar bears (Ursus maritimus) in relation to changes in feeding habits and body condition. Environmental Science and Technology 53(2):984-995.

    Lowry, L. 1985. “Pacific Walrus – Boom or Bust?” Alaska Fish & Game Magazine July/August: 2-5. pdf here.

    MacCracken, J.G., Beatty, W.S., Garlich-Miller, J.L., Kissling, M.L and Snyder, J.A. 2017. Final Species Status Assessment for the Pacific Walrus (Odobenus rosmarus divergens), May 2017 (Version 1.0). US Fish & Wildlife Service, Anchorage, AK. Pdf here (8.6 mb).

    Perovich, D., Meier, W., Tschudi, M., Hendricks, S., Petty, A.A., Divine, D., Farrell, S., Gerland, S., Haas, C., Kaleschke, L., Pavlova, O., Ricker, R., Tian-Kunze, X., Webster, M. and Wood, K. 2020. Sea ice. 2020 Arctic Report Card, NOAA. https://arctic.noaa.gov/Report-Card/Report-Card-2020/ArtMID/7975/ArticleID/891/Sea-Ice Pdf of entire Arctic Report Card here (12mb).

    Regehr, E.V., Hostetter, N.J., Wilson, R.R., Rode, K.D., St. Martin, M., Converse, S.J. 2018. Integrated population modeling provides the first empirical estimates of vital rates and abundance for polar bears in the Chukchi Sea. Scientific Reports 8 (1) DOI: 10.1038/s41598-018-34824-7 https://www.nature.com/articles/s41598-018-34824-7

    Rode, K.D., Regehr, E.V., Douglas, D., Durner, G., Derocher, A.E., Thiemann, G.W., and Budge, S. 2014. Variation in the response of an Arctic top predator experiencing habitat loss: feeding and reproductive ecology of two polar bear populations. Global Change Biology 20(1):76-88. http://onlinelibrary.wiley.com/doi/10.1111/gcb.12339/abstract

    Rode, K. D., R. R. Wilson, D. C. Douglas, V. Muhlenbruch, T.C. Atwood, E. V. Regehr, E.S. Richardson, N.W. Pilfold, A.E. Derocher, G.M Durner, I. Stirling, S.C. Amstrup, M. S. Martin, A.M. Pagano, and K. Simac. 2018. Spring fasting behavior in a marine apex predator provides an index of ecosystem productivity. Global Change Biology http://onlinelibrary.wiley.com/doi/10.1111/gcb.13933/full


    https://polarbearscience.com/2021/0...uge-benefit-of-less-summer-sea-ice-more-food/
     
    #793
  14. Prince Knut

    Prince Knut GC Thread Terminator

    Joined:
    May 23, 2011
    Messages:
    25,482
    Likes Received:
    12,840
    Godspeed Michael Collins. The astronaut's astronaut - competent, a fine pilot, and a wonderful engineer. Duty done, mission successful. <applause>
     
    #794
  15. Red Hadron Collider

    Red Hadron Collider The Hammerhead

    Joined:
    Mar 2, 2011
    Messages:
    57,485
    Likes Received:
    9,843
    Didn't know he'd gone. RIP.
     
    #795
    Prince Knut likes this.
  16. Red Hadron Collider

    Red Hadron Collider The Hammerhead

    Joined:
    Mar 2, 2011
    Messages:
    57,485
    Likes Received:
    9,843
    COVID-19 can affect the brain. New clues hint at how
    Researchers are sifting through symptoms to figure out what the virus does to the brain

    COVID-19 can come with brain-related problems, but just how the virus exerts its effects isn’t clear.

    ROXANA WEGNER/MOMENT/GETTY IMAGES

    By Laura Sanders

    APRIL 27, 2021 AT 6:00 AM

    For more than a year now, scientists have been racing to understand how the mysterious new virus that causes COVID-19 damages not only our bodies, but also our brains.

    Early in the pandemic, some infected people noticed a curious symptom: the loss of smell. Reports of other brain-related symptoms followed: headaches, confusion, hallucinations and delirium. Some infections were accompanied by depression, anxiety and sleep problems.

    Recent studies suggest that leaky blood vessels and inflammation are somehow involved in these symptoms. But many basic questions remain unanswered about the virus, which has infected more than 145 million people worldwide. Researchers are still trying to figure out how many people experience these psychiatric or neurological problems, who is most at risk, and how long such symptoms might last. And details remain unclear about how the pandemic-causing virus, called SARS-CoV-2, exerts its effects.

    “We still haven’t established what this virus does in the brain,” says Elyse Singer, a neurologist at the University of California, Los Angeles. There are probably many answers, she says. “It’s going to take us years to tease this apart.”

    Getting the numbers
    For now, some scientists are focusing on the basics, including how many people experience these sorts of brain-related problems after COVID-19.

    A recent study of electronic health records reported an alarming answer: In the six months after an infection, one in three people had experienced a psychiatric or neurological diagnosis. That result, published April 6 in Lancet Psychiatry, came from the health records of more than 236,000 COVID-19 survivors. Researchers counted diagnoses of 14 disorders, ranging from mental illnesses such as anxiety or depression to neurological events such as strokes or brain bleeds, in the six months after COVID-19 infection.

    “We didn’t expect it to be such a high number,” says study coauthor Maxime Taquet of the University of Oxford in England. One in three “might sound scary,” he says. But it’s not clear whether the virus itself causes these disorders directly.

    The vast majority of those diagnoses were depression and anxiety, “disorders that are extremely common in the general population already,” points out Jonathan Rogers, a psychiatrist at University College London. What’s more, depression and anxiety are on the rise among everyone during the pandemic, not just people infected with the virus.

    Mental health disorders are “extremely important things to address,” says Allison Navis, a neurologist at the post-COVID clinic at Icahn School of Medicine at Mount Sinai in New York City. “But they’re very different than a stroke or dementia,” she says.

    About 1 in 50 people with COVID-19 had a stroke, Taquet and colleagues found. Among people with severe infections that came with delirium or other altered mental states, though, the incidence was much higher — 1 in 11 had strokes.


    Serious neurological damage, such as these strokes caused by blocked blood vessels, turns up in people with COVID-19.
    K. THAKUR ET AL/BRAIN 2021
    Taquet’s study comes with caveats. It was a look back at diagnosis codes, often entered by hurried clinicians. Those aren’t always reliable. And the study finds a relationship, but can’t conclude that COVID-19 caused any of the diagnoses. Still, the results hint at how COVID-19 affects the brain.

    Blood vessels scrutinized
    Early on in the pandemic, the loss of smell suggested that the virus might be able to attack nerve cells directly. Perhaps SARS-CoV-2 could breach the skull by climbing along the olfactory nerve, which carries smells from the nose directly to the brain, some researchers thought.

    That frightening scenario doesn’t seem to happen much. Most studies so far have failed to turn up much virus in the brain, if any, says Avindra Nath, a neurologist who studies central nervous system infections at the National Institutes of Health in Bethesda, Md. Nath and his colleagues expected to see signs of the virus in brains of people with COVID-19 but didn’t find it. “I kept telling our folks, ‘Let’s go look again,’” Nath says.

    That absence suggests that the virus is affecting the brain in other ways, possibly involving blood vessels. So Nath and his team used an MRI machine so powerful that it's not approved for clinical use in living people to scan the blood vessels in post-mortem brains of people who had been infected with the virus. “We were able to look at the blood vessels in a way that nobody could,” he says.

    Damage abounded, the team reported February 4 in the New England Journal of Medicine. Small clots sat in blood vessels. The walls of some vessels were unusually thick and inflamed. And blood was leaking out of the vessels into the surrounding brain tissue. “You can see all three things happening at the same time,” Nath says.

    Those results suggest that clots, inflamed linings and leaks in the barriers that normally keep blood and other harmful substances out of the brain may all contribute to COVID-related brain damage.


    Signs of damage in the brains of people with COVID-19 involve inflammation, including these immune cells around a blood vessel (left), and changes in cells (right) that might have resulted from low oxygen.
    J. LOU ET AL/FREE NEUROPATHOLOGY 2021
    But several unknowns prevent any definite conclusions about how these damaged blood vessels relate to people’s symptoms or outcomes. There’s not much clinical information available about the people in Nath’s study. Some likely died from causes other than COVID-19, and no one knows how the virus would have affected them had they not died.

    Inflamed body and brain
    Inflammation in the body can cause trouble in the brain, too, says Maura Boldrini, a psychiatrist at Columbia University in New York. Inflammatory signals released after injury can change the way the brain makes and uses chemical signaling molecules, called neurotransmitters, that help nerve cells communicate. Key communication molecules such as serotonin, norepinephrine and dopamine can get scrambled when there’s lots of inflammation.

    Neural messages can get interrupted in people who suffer traumatic brain injuries, for example; researchers have found a relationship between inflammation and mental illness in football players and others who experienced hits to the head.

    Similar evidence comes from people with depression, says Emily Troyer, a psychiatrist at the University of California, San Diego. Some people with depression have high levels of inflammation, studies have found. “We don’t actually know that that’s going on in COVID,” she cautions. “We just know that COVID causes inflammation, and inflammation has the potential to disrupt neurotransmission, particularly in the case of depression.”

    Among the cells that release inflammatory proteins in the brain are microglia, the brain’s version of the body’s disease-fighting immune system. Microglia may also be involved in the brain’s response to COVID-19. Microglia primed for action were found in about 43 percent of 184 COVID-19 patients, Singer and others reported in a review published February 4 in Free Neuropathology. Similar results come from a series of autopsies of COVID-19 patients’ brains; 34 of 41 brains contained activated microglia, researchers from Columbia University Irving Medical Center and New York Presbyterian Hospital reported April 15 in Brain.

    With these findings, it’s not clear that SARS-CoV-2 affects people’s brains differently from other viruses, says Navis. In her post–COVID-19 clinic at Mount Sinai, she sees patients with fatigue, headaches, numbness and dizziness — symptoms that are known to follow other viral infections, too. “I’m hesitant to say this is unique to COVID,” Navis says. “We’re just not used to seeing so many people getting one specific infection, or knowing what the viral infection is.”

    Teasing apart all the ways the brain can suffer amid this pandemic, and how that affects any given person, is impossible. Depression and anxiety are on the rise, surveys suggest. That rise might be especially sharp in people who endured stressful diagnoses, illnesses and isolation.


    In a postmortem brain from a person with COVID-19, a clotting protein called fibrinogen (red) indicates that the blood vessels are damaged and leaky.
    AVINDRA NATH
    Just being in an intensive care unit can lead to confusion. Delirium affected 606 of 821 people — 74 percent — while patients were in intensive care units for respiratory failure and other serious emergencies, a 2013 study found. Post-traumatic stress disorder afflicted about a third of people who had been seriously sick with COVID-19 (SN: 3/12/21).

    More specific aspects of treatment matter too. COVID-19 patients who spent long periods of time on their stomachs might have lingering nerve pain, not because the virus attacked the nerve, but because the prone position compressed the nerves. And people might feel mentally fuzzy, not because of the virus itself, but because a shortage of the anesthetic drug, propofol, meant they received an alternative sedative that can bring more aftereffects, says Rogers, the psychiatrist at University College London.

    Lingering questions — what the virus actually does to the brain, who will suffer the most, and for how long — are still unanswered, and probably won’t be for a long time. The varied and damaging effects of lockdowns, the imprecision doctors and patients use for describing symptoms (such as the nonmedical term “brain fog”) and the indirect effects the virus can have on the brain all merge, creating a devilishly complex puzzle.

    For now, doctors are busy focusing on ways in which they can help, even amid these mysteries, and designing larger, longer studies to better understand the effects of the virus on the brain. That information will be key to helping people move forward. “This isn’t going to be over soon, unfortunately,” Troyer says.
     
    #796
    Prince Knut likes this.
  17. Prince Knut

    Prince Knut GC Thread Terminator

    Joined:
    May 23, 2011
    Messages:
    25,482
    Likes Received:
    12,840
    Quantum computers are revealing an unexpected new theory of reality
    A powerful new idea about how the laws of physics work could bring breakthroughs on everything from quantum gravity to consciousness, says researcher Chiara Marletto

    PHYSICS 14 April 2021
    By Chiara Marletto


    “Quantum supremacy” is a phrase that has been in the news a lot lately. Several labs worldwide have already claimed to have reached this milestone, at which computers exploiting the wondrous features of the quantum world solve a problem faster than a conventional classical computer feasibly could. Although we aren’t quite there yet, a general-purpose “universal” quantum computer seems closer than ever – a revolutionary development for how we communicate and encrypt data, for virtual reality, artificial intelligence and much more.

    These prospects excite me as a theoretical physicist too, but my colleagues and I are captivated by an even bigger picture. The quantum theory of computation originated as a way to deepen our understanding of quantum theory, our fundamental theory of physical reality. By applying the principles we have learned more broadly, we think we are beginning to see the outline of a radical new way to construct laws of nature.

    It means abandoning the idea of physics as the science of what’s actually happening, and embracing it as the science of what might or might not happen. This “science of can and can’t” could help us tackle some of the big questions that conventional physics has tried and failed to get to grips with, from delivering an exact, unifying theory of thermodynamics and information to getting round conceptual barriers that stop us merging quantum theory with general relativity, Einstein’s theory of gravity. It might go even further and help us to understand how intelligent thought works, and kick-start a technological revolution that would make quantum supremacy look modest by comparison.





    Since the dawn of modern physics in Galileo Galilei and Isaac Newton’s times, physics has progressed using broadly the same approach. At its core are exact laws of motion: equations that describe how a system evolves in space and time from a given set of initial conditions. Think Newton’s laws of motion describing billiard balls on a table, or his universal law of gravitation explaining how apples fall to the ground and Earth moves around the sun.

    “Physical laws are our manual to the universe, and the best laws are exact”

    The word “exact” is important here. If you were to buy a device such as a washing machine, a manual stating how to use it approximately, plus or minus some error, would be pretty useless. Physical laws are our tentative manual to the universe, and the best laws are exact ones, too: they are easier to test and discard when they clash with evidence.

    At least initially, quantum theory changed nothing about this traditional approach. At the heart of the theory when it was first formulated in the 1920s is an exact equation of motion, the Schrödinger equation, which determines how quantum systems evolve. The big difference from the classical world is that this equation tells us that quantum objects obey Heisenberg’s uncertainty principle. This states that certain quantum properties are incompatible, meaning they can’t be measured simultaneously to arbitrarily high accuracy: if you have one property perfectly focused, you must lose sight of the other. Position and velocity are one such pair, so if you have an electron’s position, say, perfectly in focus, it must be in a quantum “superposition” of all its possible velocities. The values of an electron’s quantum-mechanical spin along two different axes are another incompatible pair.

    Examining the nature of the uncertainty principle back in the 1980s led quantum computing pioneer David Deutsch to a radical insight. The best way to think about an electron in a certain spin state, for example, is as a “qubit” – an entity that can instantiate one bit of information in multiple ways that can’t all be sharp, or in focus, at the same time. What’s important about this qubit – the essence of its quantumness – isn’t the trajectories it follows in space and time, but the transformations you can and can’t perform on it. For instance, you can’t copy all the information reliably from a single qubit, but all that information about its incompatible properties exists, and can be used to perform quantum computations.
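
    That "can't copy a qubit" rule is the no-cloning theorem. A standard textbook sketch of why (my addition, not from the article): suppose a single unitary U could clone every state, so that $U\lvert\psi\rangle\lvert 0\rangle = \lvert\psi\rangle\lvert\psi\rangle$. Writing $\lvert{+}\rangle = (\lvert 0\rangle + \lvert 1\rangle)/\sqrt{2}$, linearity forces

    \[
    U\lvert{+}\rangle\lvert 0\rangle
      = \tfrac{1}{\sqrt{2}}\bigl(U\lvert 0\rangle\lvert 0\rangle + U\lvert 1\rangle\lvert 0\rangle\bigr)
      = \tfrac{1}{\sqrt{2}}\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr),
    \]
    \[
    \text{whereas cloning demands}\quad
    U\lvert{+}\rangle\lvert 0\rangle
      = \lvert{+}\rangle\lvert{+}\rangle
      = \tfrac{1}{2}\bigl(\lvert 00\rangle + \lvert 01\rangle + \lvert 10\rangle + \lvert 11\rangle\bigr).
    \]

    The two states differ, so no such U exists: an exact "can't" of quantum information.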

    Actual to counterfactual
    These rules of “can” and “can’t” surrounding qubits and their incompatible variables make them much more powerful than classical bits, and underlie the promise of quantum computers and quantum supremacy. More fundamentally, however, they tell us that, rather than always focusing on what happens (the actual), you can lay the foundations of a physical theory on what could or couldn’t be (the counterfactual), and explain the actual in terms of the counterfactual.

    Now comes the daring leap: what if these “can and can’t” properties were key to the whole of physics? Instead of starting from initial conditions and exact dynamical laws, you might express physics in terms of laws of possible and impossible transformations, and derive other laws of motion from these.

    This counterfactual approach isn’t an entirely new mode of thinking in physics. The first and second laws of thermodynamics, as conceived in the 19th century, set powerful counterfactual constraints. You can, for example, construct a “heat engine” that converts heat to useful work, but you can’t convert heat completely into useful work, or create energy out of nothing.
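
    In symbols, a standard form of that "can't" (textbook thermodynamics, not the article's own notation): a cyclic engine drawing heat $Q_h$ from a reservoir at temperature $T_h$, rejecting $Q_c$ at $T_c$ and delivering work $W$ obeys

    \[
    \eta \;=\; \frac{W}{Q_h} \;\le\; 1 - \frac{T_c}{T_h} \;<\; 1 \qquad (T_c > 0),
    \]

    so $Q_c = Q_h - W > 0$: some heat must always be rejected, and a perfect heat-to-work converter is impossible.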

    Thermodynamics is a formidable tool: its principles allow us to make predictions about systems with large numbers of particles, for instance, whose dynamical laws are intractable. Generalising this logic, the science of can and can’t allows us to formulate new principles and improve on existing ones (see “A new thermodynamics“) – and, perhaps surprisingly, express more phenomena in terms of exact physical laws.


    Quantum computers could inspire new theories of physics

    IBM Research/Science Photo Library

    Information is a crucial example. What physical property makes a computer bit capable of containing information? Not that it is in a particular state, 0 or 1, but that you can, once it has been set to 0, set it to 1, and vice versa – and also that you can copy its value to another physical system, if it too is made of bits. These properties are counterfactuals that the traditional physical approach of explaining everything with dynamical laws struggles to handle.

    The science of can and can’t allows us to express exact physical laws capturing the regularities that allow bits to exist in the universe. What’s more, these laws explain classical bits – the state of a traffic light or a neuron in the brain – just as well as qubits. You don’t need to worry about underlying laws of motion, whether quantum or classical or anything else. Far from being irreconcilable, quantum and classical information are unified by general overarching principles about how you can and can’t manipulate it.

    That bodes well for making progress on merging quantum theory and general relativity. Notoriously, these theories, our best current guides to the universe, are fundamentally incompatible. While quantum theory requires masses to display Heisenberg uncertainty, general relativity doesn’t allow it. In terms of information theory, gravity is fundamentally a classical entity – one that can support only bits, not qubits.

    “The ‘universal constructor’ is an all-powerful 3D printer that can be used to make anything possible”

    To unify the theories, we need to treat quantum and classical information on the same footing – and the science of can and can’t does just that. My colleague Vlatko Vedral and I have already done preliminary work using its principles to constrain existing and future proposals for quantum gravity. They can also be used to make predictions in contexts where both theories matter, but neither fully applies, such as in the interior of black holes or in the first moment of the big bang.

    The potential advantages don’t stop there, however. Can/can’t rules about the manipulation of information don’t depend on the existence of subjective beings to observe what is happening. They can therefore give us an objective handle on other properties based on information that, in the traditional approach, seem only subjectively defined and thus out of the reach of physics.

    The most interesting property of this type is knowledge: the kind of resilient information brought about by evolution and created in our brains when we think. In the can and can’t picture, knowledge is described not in terms of subjective features of knowing about things, but simply as information that can enable its own survival. On this basis, we can attempt to formulate exact physical laws about how knowledge is created, or whether it is finite or unbounded – questions that are beyond traditional physics.

    Being capable of producing knowledge is a characteristic trait of conscious entities, so an exact theory of knowledge, fully rooted within physics, would be an essential stepping stone towards a theory of consciousness or general artificial intelligence. It might give us new tools to look for alien life too. At present, we are limited to searching for life elsewhere in the cosmos by looking for its chemical signatures, even though we have no guarantee it is based on the same chemistry as the life we do know. A physical theory of knowledge is likely to provide more generally applicable predictions.

    But is it true?
    As yet, these ideas are all theoretical, but there are promising avenues to test them. One concerns the phenomenon known as entanglement, a type of correlation between different particles or qubits that is stronger than any classical correlation between the properties of two objects. Vedral and I have shown that the science of can and can’t predicts what transformations are possible for two qubits interacting with another object that may or may not obey quantum theory, such as a macroscopic biomolecule, or even gravity. As a result, we can test for the presence of elusive quantum effects in an unknown system by setting up an experiment in which this “mystery” object is the only channel of interaction between the two qubits. If the mystery object can entangle the qubits, then we can conclude that it must have some quantum features, in a way that is independent of the laws of motion governing the unknown system.

    Several groups are now trying to test this experimentally, having the qubits be two quantum masses and the unknown system be gravity. If entanglement were to be observed, it would be the first empirical refutation of classical theories of gravity, including general relativity, as well as the first test of the principles of the science of can and can’t.
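
    A toy version of that logic (my sketch, not the groups' actual protocol): prepare two independent qubits, let a hypothetical mediator imprint a controlled phase between them, and test for entanglement via the negativity of the partially transposed state:

    import numpy as np

    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    psi = np.kron(plus, plus)                  # two qubits, initially unentangled
    phi = np.pi                                # HYPOTHETICAL phase from the mediator
    U = np.diag([1, 1, 1, np.exp(1j * phi)])   # controlled-phase interaction
    psi = U @ psi

    rho = np.outer(psi, psi.conj())
    # Partial transpose over qubit B; negativity = sum of |negative eigenvalues|.
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    neg = sum(abs(e) for e in np.linalg.eigvalsh(rho_pt) if e < 0)
    print(f"negativity = {neg:.3f}")           # > 0 means the qubits got entangled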

    That is an exciting prospect, even if making such experiments work is challenging and probably still a few years away yet. But let’s circle back to where we started, with the idea of the technological breakthroughs we anticipate on the back of a universal quantum computer. In the 1940s, mathematician John von Neumann pointed out that the universal computer, one capable of all physically permitted computations, isn’t the most universal machine that can be programmed. He conceived the “universal constructor”, a machine that can perform all physically possible transformations – essentially an all-powerful 3D printer that, provided with the requisite knowledge, can be programmed to produce anything that is physically possible.

    Von Neumann never managed to develop a physical basis for his universal constructor, let alone engineer one. The science of can and can’t, when fully developed, is the best candidate for the theory that underlies the universal constructor. That is why the collection of research projects aiming to implement the science of can and can’t is called the Constructor Theory Programme. Originally proposed by David Deutsch, it is now being pursued by my group at the University of Oxford, and our collaborators at the Centre for Quantum Technologies in Singapore, and the Institute for Scientific Interchange and Italian National Metrology Institute, both in Turin.

    Our hope is that constructor theory will be critical for the technological revolution after quantum computation, just as thermodynamics helped spur the original industrial revolution, or Alan Turing’s ideas about universal computation informed the information-technology revolution.

    Will it be? The honest answer is that it is too soon to tell. Science is tentative: the faster we make errors, the more chances we have to make progress. Physics is full of open problems that are too often swept under the carpet. Far from being undesirable, they are rich opportunities to find the next breakthrough. There is no guarantee that the science of can and can’t will succeed, but it will teach us a lot of new physics by solving some of those problems. It already has. There’s a saying that “the best way to predict the future is to invent it”. The science of can and can’t is one of our most promising bets to invent the future.

    A NEW THERMODYNAMICS
    One consequence of the second law of thermodynamics is that when heat is generated, say through friction on a flywheel, you can't reverse the energy transfer and convert the heat back entirely into useful work, say to drive a piston. This seems to clash with the reversible laws governing the microscopic particles of the flywheel and piston, which say that if a forward motion is allowed, so is its reverse.

    The standard way of explaining away this contradiction is to say that thermodynamic laws are “emergent” approximations of what is going on at microscopic scales. They are valid only in a statistical sense for large numbers of particles: the reversible, microscopic laws of motion are the fundamental laws.

    One consequence is that the laws of thermodynamics as they stand are insufficient to build engines made of just a few particles, a stumbling block on the way to developing nanomachines. These could have a plethora of applications, from repairing cells in our bodies to removing harmful chemicals from the atmosphere.

    CYCLING BACKWARDS
    The “science of can and can’t” approach that my colleagues and I are developing (see main story) takes a different path. It says that a thermodynamic transformation is possible when it can be brought about on a system to an arbitrarily high accuracy, with an arbitrarily small error, by an entity that operates in a cycle, reliably.

    So, for example, a mechanical stirrer can increase the temperature of an otherwise isolated mass of water by increasing the kinetic energy of its molecules. But here, reversing the trajectory doesn’t perform the inverse operation of cooling the water: that requires a refrigerator, a cyclic machine that goes far beyond just the stirrer’s atoms running backwards.

    So being able to transform something doesn’t always mean that its reverse transformation is possible – and irreversibility formulated in terms of possibility and impossibility doesn’t clash with time-reversal-symmetric laws. In the science of can and can’t, it is possible to formulate an upgraded second law of thermodynamics that is valid, exactly, at all scales, and regardless of the dynamical laws the particles are following.
     
    #797
  18. Prince Knut

    Prince Knut GC Thread Terminator

    Joined:
    May 23, 2011
    Messages:
    25,482
    Likes Received:
    12,840
    Everything we know about the universe – and a few things we don't


    How big is the universe? What shape is it? How fast is it expanding? And when will it end? We answer these questions and more in our essential guide to the current state of cosmological knowledge



    SPACE 30 December 2020
    By Stuart Clark


    HOW OLD IS THE UNIVERSE?

    A century ago, if you asked a cosmologist the universe’s age, the answer may well have been “infinite”. It was a neat way to sidestep the question of how it formed, and the idea had been enshrined in 1917 when Albert Einstein presented his model of a static universe through his general theory of relativity.

    General relativity describes gravity, the force that sculpts the universe, as the result of mass warping its fabric, space-time. In the mid-1920s, astrophysicist Georges Lemaître showed that according to the theory, the universe wasn’t static but expanding – and would thus have been smaller in the past.

    Lemaître’s idea that everything there is was once contained in a single “primordial atom” was transformed in the 1960s, when astronomers discovered the most ancient light in the universe, the cosmic microwave background. This indicated that everything had begun in a hot, dense state: the big bang.





    These days, most cosmologists are confident that this happened about 13.85 billion years ago. The figure is based on estimates of the universe’s expansion. There is some uncertainty there, because methods for estimating that rate spit out different values (see “How fast is the universe expanding”). The possible range of ages is between 12 billion and 14.5 billion years.
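
    The zeroth-order arithmetic behind that figure is the "Hubble time", 1/H0, which ignores how the expansion rate has changed over cosmic history (the H0 below is an illustrative value, not a measurement):

    # Hubble time: the age scale of the universe for an assumed H0.
    H0 = 70.0            # km/s/Mpc, illustrative value between competing estimates
    Mpc_km = 3.0857e19   # kilometres in one megaparsec
    sec_per_yr = 3.156e7

    t_hubble = Mpc_km / H0 / sec_per_yr
    print(f"1/H0 ~ {t_hubble / 1e9:.1f} billion years")  # ~14.0 billion years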

    We can cross-check that against the oldest star we know. It’s clear that HD 140283, aka the Methuselah star, is ancient because it is made almost entirely of hydrogen and helium, the predominant elements in existence in the aftermath of the big bang. Now, astronomers reckon it is 14.46 billion years old, give or take 0.8 billion years. That could make it slightly older than the universe.

    But the fact that the age of the oldest star we can find is so close to our estimates of the universe’s age suggests that the standard model of cosmology – our general relativistic model of how the universe evolved, which supplies these estimates – is secure. How long the cosmos has existed isn’t really in doubt. For many other properties of the universe, we can’t be so sure.

    HOW BIG IS THE UNIVERSE?
    Stare out at the night sky for any length of time and you’ll ponder how far it all extends. For most of human history, the universe was commonly thought to be separate from Earth and the stars surrounding it – a sort of no-man’s land between us and heaven. Yet since the scientific revolution in the 17th century, astronomers have come up with various ways to measure distances to celestial objects.

    These methods are collectively known as the cosmic distance ladder (see “Ladder to the stars”). “It’s basically a bootstrap thing,” says James Schombert at the University of Oregon. Each part of the ladder builds on the one below until, eventually, you reach distant celestial objects bright enough to be seen across the grandest cosmic scales: galaxies and exploding stars called supernovae.

    please log in to view this image

    This means we can measure the universe in its entirety, or at least we can try. The most distant galaxy known is GN-z11. Light from it has taken 13.4 billion years to reach us – most of the age of the universe. In that time, space-time has expanded. Working from the rate of expansion given by the standard model, this galaxy is probably now about 32 billion light years away from us. Extrapolating to the entire observable universe, astronomers estimate it has a diameter of 93 billion light years, or very roughly 10²⁶ metres (100 million billion billion kilometres).
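
    How 13.4 billion years of light travel becomes "about 32 billion light years away now": a sketch of the standard comoving-distance integral, using illustrative flat-LambdaCDM parameters (H0 = 70 and Omega_m = 0.3 are assumptions, not the surveys' fitted values):

    import numpy as np
    from scipy.integrate import quad

    c = 299792.458     # km/s, speed of light
    H0 = 70.0          # km/s/Mpc (assumed)
    Om, OL = 0.3, 0.7  # flat LambdaCDM densities (assumed)
    z = 11.0           # approximate redshift of GN-z11

    E = lambda zp: np.sqrt(Om * (1 + zp)**3 + OL)  # H(z)/H0
    integral, _ = quad(lambda zp: 1 / E(zp), 0, z)
    D_C = (c / H0) * integral                      # comoving distance in Mpc

    print(f"~{D_C * 3.2616e6 / 1e9:.0f} billion light years")  # ~32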

    But that is just the distance between the furthest things we can see. “You don’t walk 10²⁶ metres and then hit a brick wall,” says Tony Padilla at the University of Nottingham, UK. “The universe goes beyond that.”

    We can’t see past this cosmological horizon. Instead, we make inferences based on what the standard model of cosmology tells us. Most cosmologists believe that, immediately after the big bang, the universe underwent a moment of exponential expansion known as cosmic inflation. It is the best way to square our observations of a smooth, uniform universe at the grandest scales with the big bang, because quantum theory tells us that tiny energy fluctuations in random places would have created an uneven distribution of matter. Without inflation, that randomness could not have evened out during the time since the big bang.

    Inflation also happens to suggest a universe much larger than the one we can see. Whereas the “inflaton” field thought to have powered it stopped at some point in our region of the wider universe, it would continue to spark fresh bouts of inflation elsewhere. “You get really big universes in these [eternal inflation] scenarios, and I mean just off-the-scale huge,” says Padilla.

    Whether or not they are part of our universe, or separate, is a matter of perspective (see “How many universes are there?”). Clearly though, to understand the size of the universe beyond the cosmological horizon, we need a better picture of the universe’s first moments.

    “Cosmic inflation suggests a universe much larger than the one we can see”

    HOW FAST IS THE UNIVERSE EXPANDING?
    Space-time is getting bigger all the time, like dough rising in the oven. The observational proof of this came in 1929, when astronomer Edwin Hubble demonstrated that distant galaxies are speeding away from our own. We have even been able to clock the expansion rate, measured as the speed at which every million parsecs of space expands per second, by measuring the distance to numerous galaxies and comparing those distances with their redshift – the extent to which light emitted by each galaxy is stretched as a result of the universe’s expansion.

    In the early 2000s, the Hubble Space Telescope showed that the current expansion rate was close to 75 kilometres per second per megaparsec. Cosmologists thought they had this one nailed. All that remained was to measure how much this rate was slowing as the gravitational pull of all the universe’s matter and energy fought to drag things together. When the answer came, it broke everything.


    The most distant known galaxy is GN-z11, roughly 32 billion light years from Earth

    NASA, ESA, and P. Oesch (Yale University)

    In the late 1990s, we discovered that the expansion wasn’t slowing down at all. On the contrary, it was speeding up – and nothing in known physics could explain it. The only thing that might fit the bill was a fudge factor that Einstein had included in his equations of general relativity when he thought the universe was static. Dialled up, this “cosmological constant” could reverse the deceleration from gravity and power an accelerating expansion. This was the birth of dark energy, a mysterious addition to the standard model of cosmology that continues to evade characterisation.

    The conundrum only got more intractable in 2013, when the Planck satellite from the European Space Agency (ESA) returned the most precise map yet of the cosmic microwave background. Feeding that data into the standard model and running the clock forwards, researchers calculated that the universe should be expanding at 68 kilometres per second per megaparsec – slower than the rate we get from supernovae.

    To bring the two values into alignment, physicists refined their calculations and better quantified the sources of possible error, only to see the discrepancy grow. The tension means that the standard model is incapable of describing the universe as we observe it. Now some cosmologists are wondering if general relativity, the model’s foundation stone, needs resetting.

    There is certainly wiggle room. Tessa Baker, a cosmologist at Queen Mary University of London, says that although tests of gravity across the solar system, and in other specific situations, are extraordinarily precise, there is still plenty of scope for gravity to work differently from how Einstein predicted at the largest cosmological scales. “The experimental bounds we have on gravity operating over distance scales of megaparsecs or so are really weak,” she says. The strength of gravity could plausibly be 10 to 20 per cent stronger on those scales, she adds.

    Naturally, theorists are having a field day. But Chris Van Den Broeck, a physicist at the National Institute for Subatomic Physics in Amsterdam, the Netherlands, isn’t ready to sound the standard model’s death knell yet. “The tension is there, but I’m not yet convinced that we should panic,” he says.

    HOW HEAVY IS THE UNIVERSE?
    Calculating how much stuff there is in the universe has long preoccupied cosmologists, largely because it seems that so much of it is invisible.

    Take dark matter, so named because it doesn’t interact with light. This mysterious source of mass was invoked to explain how galaxies and clusters of galaxies hold together when we realised that the gravitational pull of ordinary visible matter alone isn’t enough to do the job. It has since become a vital component of the standard model, its hidden gravitational hand sculpting the structure of the cosmos.

    We still haven’t detected dark matter. Yet by looking at the pattern of temperature fluctuation in the cosmic microwave background, indicative of the interplay of matter and energy in the early universe, physicists are able to estimate its abundance compared with ordinary matter. The upshot is that dark matter outweighs normal matter by more than 5 to 1. The cosmos is roughly 5 per cent ordinary matter, 27 per cent dark matter and 68 per cent dark energy – that other mysterious form of mass/energy. This much is gospel – at least for now.

    Recently, however, a puzzle has emerged from measurements of the extent to which galaxies clump together on a scale of 8 megaparsecs. The value of this quantity, known as sigma-8, depends on how much mass there is in the universe, because it is the gravity resulting from this mass that pulls matter together into clumps. We can measure it based on observations or we can predict it based on the standard model. Again, precise measurements produce a troubling discrepancy.

    [Image: The LISA Pathfinder satellite, a prototype for a probe that could reveal the true shape of the universe. Credit: ESA/ATG MEDIALAB]

    Working from established ratios of the different kinds of matter and the behaviour of gravity as described by general relativity, the standard model predicts that sigma-8 should be 0.81. But when Hendrik Hildebrandt at Ruhr-University Bochum, Germany, and his colleagues measured this value in 2017, they got a different answer. He and his team used a technique called weak gravitational lensing, which measures the extent to which light from distant galaxies is distorted by massive objects between us and them. Their value for sigma-8 came out at 0.74, suggesting that there is less matter in the universe than we predict when using the standard model.

    Future observatories such as the ground-based Vera Rubin Observatory and the ESA’s space-borne Euclid mission are scheduled to devote time to refining this measurement. If the discrepancy remains, it will need explaining. If it can’t be explained, then there’s another reason to think our standard cosmology needs an overhaul.

    “The latest measurements suggest there is less matter in the universe than expected”

    WHAT SHAPE IS THE UNIVERSE?
    When cosmologists talk about the geometry of the universe, they are referring to the overall shape of space-time. In our expanding universe, there are basically two possibilities. If the gravity produced by all matter is stronger than the expansion, it will ultimately pull everything back together. In that case, we are living in a “closed” or spherical universe. If whatever is driving the expansion overpowers gravity, however, then we have a perpetually expanding or “open” universe shaped like a saddle.
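
    What tips the balance between the two is the universe’s density. The dividing line is the critical density, rho_c = 3 * H0^2 / (8 * pi * G): above it the universe is closed, below it open, exactly at it flat. A minimal sketch, assuming a round 70 km/s per megaparsec for the expansion rate:

        import math

        G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
        H0 = 70e3 / 3.086e22    # 70 km/s/Mpc converted to 1/s (assumed value)

        rho_c = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
        m_h = 1.67e-27                          # mass of a hydrogen atom, kg

        print(f"critical density ~ {rho_c:.1e} kg/m^3")
        print(f"about {rho_c / m_h:.0f} hydrogen atoms per cubic metre")
        # ~9.2e-27 kg/m^3, i.e. only five or six hydrogen atoms per cubic
        # metre: the knife-edge the universe appears to balance on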

    Intriguingly, however, the universe seems to be precariously balanced between these two options. The theory of cosmic inflation helps to explain this fluke by ironing out our perception of any overall curvature, and the idea that we reside in a flat universe is now hardwired into cosmology’s standard model. Even so, there are suspicions.

    Alessandro Melchiorri at Sapienza University in Rome, Italy, is part of a team that analysed the latest data from the Planck mission, which measured temperature fluctuations across the cosmic microwave background to the most precise level ever achieved. One thing the researchers analysed was the extent to which light from the cosmic microwave background is distorted by the process of weak gravitational lensing as it travels towards us. They found more lensing than the standard model of cosmology predicts – unless you remove the assumption of a flat universe. “If you perform a model fit, leaving the curvature to vary, you see that the best solution is a closed universe with more dark matter,” says Melchiorri.

    But as Melchiorri and his colleagues demonstrated in a follow-up study earlier this year, a closed universe exacerbates the discrepancies cosmologists are seeing elsewhere in the standard model, not least the fact that the universe seems to be expanding faster than predictions suggest it should. Explaining that gets even harder if the universe is spherical rather than flat.

    Pretty much every other measurement we have suggests that the universe is flat. It is possible that this latest observation is a statistical fluke that disappears with new cosmological surveys from the Vera Rubin telescope or the Euclid satellite, for instance.

    If not, however, then the best way forward is to extract better data about the true nature of the big bang and cosmic inflation. This is where gravitational waves come in. These ripples in space-time, best known as the result of collisions between distant black holes, can also open a window onto the early universe if we can detect any that have made their way to us from the furthest reaches of the cosmos. “There are a bunch of [cosmological] mechanisms that could conceivably have caused a flash of gravitational radiation a fraction of a second after the big bang,” says Van Den Broeck – mechanisms like inflation.

    Primordial gravitational waves would be visible today as a background of ripples, coming from all directions. They are distinct in the sense that they would have much longer wavelengths than the ones we have detected from black hole collisions, thanks to the expansion of the universe. Our best ground-based gravitational wave detectors operate at too high a frequency to see them. But the ESA’s planned space-based detector, the Laser Interferometer Space Antenna (LISA), could.
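
    The effect of that stretching is easy to sketch: expansion divides a wave’s frequency by (1 + z), just as it does for light. The band edges below are rough, commonly quoted figures, not mission specifications:

        # Toy illustration of why primordial waves fall to LISA, not LIGO.
        GROUND_BAND = (10.0, 1e3)   # Hz, LIGO/Virgo-type detectors (approx.)
        LISA_BAND = (1e-4, 1e-1)    # Hz, planned space detector (approx.)

        def observed_frequency(f_emitted_hz, z):
            """Frequency after being redshifted by cosmic expansion."""
            return f_emitted_hz / (1 + z)

        # Toy numbers: a 100 Hz ripple stretched 100,000-fold arrives at
        # about a millihertz, squarely in LISA's band and far below
        # anything a ground-based detector can register.
        f = observed_frequency(100.0, z=1e5)
        print(f"{f:.1e} Hz", LISA_BAND[0] <= f <= LISA_BAND[1])   # ~1e-3, True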

    “If we could also see primordial gravitational waves that would be really thrilling,” says Padilla. “Then we would really start to learn a lot about the universe.” Perhaps most importantly, we could learn whether inflation truly happened – and whether the universe is flat after all.

    HOW MANY UNIVERSES ARE THERE?
    As mentioned earlier, when cosmologists came up with cosmic inflation, the idea that the early universe ballooned exponentially in a moment, they quickly realised they may have got more than they bargained for. “Inflation can happen anywhere in space and time,” says Padilla. “It happened in our patch of the universe a long time ago, and it made our corner of the universe very large, but there could be different parts of the universe where it’s still going on.”

    This scenario, known as eternal inflation, produces a pantheon of different “bubble” universes, all crowded together, with more budding off all the time. Welcome to the inflationary multiverse. There is no way to observe or measure it because all the bubble universes it contains lie outside the limits of our observable universe. Instead, many cosmologists are convinced it exists because it is a logical consequence of two theories, inflation and quantum mechanics, that have been demonstrated to be valid to varying degrees.

    Not being able to see them hasn’t prevented people from speculating about how many universes there might be, and what they might contain.

    “You’d barely notice the end of the universe. Blink and it will all be over”

    With the standard-issue inflationary multiverse, the number of universes is endless. What we find in each one could be something wildly different from the universe we know. This idea of a cosmic pick-and-mix grew out of attempts to explain gravity in the same way as the other three forces of nature, as a quantum force. These string theories replace familiar point-like particles with tiny vibrating strings that exist in multiple dimensions – normally 10 or 11 of them, depending on your preferred version – and predict a vast landscape of at least 10^500 different possibilities for how physics might look in the myriad bubbles of the inflationary multiverse. Each would have different physical laws and different values for the constants of nature.

    Or maybe there is just one other universe, and we have already seen tangible evidence of its existence. In 2016, the Antarctic Impulsive Transient Antenna (ANITA) detected a high-energy particle that, instead of heading in from space, appeared to be blasting out of Earth. Two years later, it made a second such discovery. One explanation is that the particle might have come from a parallel universe created concurrently with our own, but travelling backwards in time.

    WHEN WILL THE UNIVERSE END?
    Before the discovery of dark energy, the mysterious force thought to be pushing space-time apart, the future of the universe depended on geometry. Either the cosmos was closed and would collapse in on itself in a “big crunch” or it was open and would expand forever. Now, however, cosmology’s standard model assumes that we live in a flat universe that, thanks to dark energy, will expand eternally.

    If dark energy is nothing more exotic than a cosmological constant, meaning it doesn’t fluctuate over time, then the expansion of the universe will itself eventually become a constant, carrying the clusters of galaxies ever further away from one another. “We’ll be left pretty much alone in the universe,” says Baker. In this scenario, sometimes called the heat death of the universe, or the big freeze, all the stars eventually die, black holes grow larger and the remaining matter in the universe tends to equalise in temperature. With no difference in temperature, energy cannot flow and gradually the universe enters a kind of cosmic senescence, where nothing much happens at all.

    An alternative is the big rip. Here, the dark energy keeps getting stronger and the expansion of the universe keeps accelerating. “This is more exciting,” says Baker. “Even gravitationally bound objects like a galaxy can eventually get pulled apart,” as dark energy overpowers the gravity holding the celestial objects together.

    Which of these scenarios is correct will only be revealed once we know the nature of dark energy. But before you get too comfortable thinking that this is all so far in the future that you don’t need to worry about it, there is one way that the universe could end tomorrow. It rests on the idea, from string theory, that there is a vast landscape of universes with different physical laws. If so, our universe could perform a quantum trick called tunnelling, in which it would suddenly transform itself into a universe with different properties. The constants of nature and perhaps even the laws of physics would be nothing like the ones we know.

    That wouldn’t be ideal, to say the least, because the structure of atoms relies on the delicate balance between the forces of nature. Upset it, and the atoms that comprise everything could disintegrate in a flash. “If we undergo one of these phase transitions at teatime tomorrow, you’d barely notice it,” says Baker. “Blink and it’ll all be over.”

    The ultimate question for cosmologists, then, might be whether or not they can figure out if their beloved standard model is correct before quantum oblivion beckons. Watch this space-time continuum.

     
    #798
  19. Prince Knut

    Prince Knut GC Thread Terminator

    Joined:
    May 23, 2011
    Messages:
    25,482
    Likes Received:
    12,840
    David Eagleman interview: How our brains could create whole new senses
    Neuroplasticity, or the brain's ability to remodel itself, enables us to interpret all kinds of sensations. We can use that to create new ways to perceive the world, says neuroscientist David Eagleman

    Clare Wilson

    [Image credit: Rocio Montoya]

    WOULD you like to be hooked up to a device that lets you detect magnetic fields like a bird? How about sensing infrared light like a snake? Perhaps a feed of real-time stock market data into your mind is more your sort of thing. According to David Eagleman, a neuroscientist at Stanford University in California, it will soon be possible to make all this a reality.

    He has already created technologies along these lines, including a wristwatch-like device called Buzz that translates sound into patterns of vibration on the skin. Interpreting those vibrations effectively gives deaf people who use it a new kind of hearing.

    The inspiration for these ideas grew out of Eagleman’s study of neuroplasticity, the brain’s incredible ability to reforge itself in response to new experiences. In his latest book, Livewired: The inside story of the ever-changing brain, he examines just how the brain pulls off such wholesale changes and explores the extent to which we can harness this ability to learn new tricks.

    Eagleman says neuroscientists still have a lot to learn about how the brain changes. Much of the focus has been on synapses, the connections between brain cells called neurons. But there are deeper and more mysterious ways in which the brain is changing all the time, he says, and we are guilty of overlooking them. As we learn more about the brain and begin to enhance it with new technology, we might gain some intriguing new abilities.

    Clare Wilson: You study the way the brain changes in response to our experiences. How does it do that?

    David Eagleman: When you learn that my name is David Eagleman there are physical changes in the structure of your brain. That’s what lets you remember who I am. We often say the brain has plasticity, meaning it can be moulded like plastic. But I feel the term plasticity isn’t big enough to capture the way that the whole system is moving. Instead, I use the term “livewired” to represent that you have billions of neurons reconfiguring their circuitry every second. The connections between them are changing their strength and unplugging and re-plugging in elsewhere.

    So this is about more than just changes in nerve connections?

    Neuroscientists have focused too much on synapses, like drunks looking for their keys under the streetlight even though that may not be where they dropped them. It’s easy to measure synaptic strength. But actually, the brain has many other layers of change which are less easy to study. The inside of a neuron is like a city, and you have all this communication and roadways and infrastructure – all that changes too. Then there are epigenetic changes, which means that in the nucleus of the neuron, the DNA changes shape so that some genes are expressed more while others are suppressed.

    Why would the brain need many different mechanisms of plasticity?

    I hypothesise that there are different timescales of change. By way of analogy, some things in a city change quickly, like fashions, and other things change more slowly, like which restaurants are in the buildings. Some things are even slower, like the governance, rules and laws. It’s similar with the brain. When you learn something new, some parts immediately start adapting, but it’s only if what you’ve learned has relevance and stays consistent that the next layers down say: “OK, that seems like something to hold on to.”

    How well do we understand these deeper layers?

    We have a long way to go. It hasn’t been easy to build a theoretical framework to understand how this happens because, when you look under the hood, what you find in the brain is a system of such complexity that it bankrupts our language. We have no way to understand what 86 billion neurons are doing in there. It’s a living fabric, with communities and marriages and divorces.

    There are claims that adults have new brain cells developing all the time. Do they play a role in resculpting our brains?

    There’s controversy about whether this happens at a meaningful level in humans. On balance, it appears that it’s a very small feature of neuroplasticity. The thing that is interesting and mysterious is this: if you insert new neurons into a network, how come that doesn’t mess it up? If you took an artificial neural network and suddenly inserted new nodes, you’d degrade its performance. Yet somehow that doesn’t happen with humans, which just demonstrates that we have a long way to go to understand what’s going on in our brains.
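
    That point about artificial networks can be seen in a toy example (an illustrative construction, not anything from Eagleman’s work): a tiny hand-built network computes XOR, and splicing in one extra hidden unit, without retraining, wrecks its answers:

        import numpy as np

        rng = np.random.default_rng(0)

        def sigmoid(x):
            return 1 / (1 + np.exp(-x))

        def forward(x, W1, b1, W2, b2):
            return sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)

        # Hand-built network computing XOR (hidden units act as OR and NAND).
        W1 = np.array([[20.0, 20.0], [-20.0, -20.0]])
        b1 = np.array([-10.0, 30.0])
        W2 = np.array([20.0, 20.0])
        b2 = -30.0

        X = [np.array(p, dtype=float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
        print([round(float(forward(x, W1, b1, W2, b2)), 2) for x in X])
        # [0.0, 1.0, 1.0, 0.0]: correct XOR

        # "Insert a neuron": a new hidden unit with random input weights and
        # an output weight as strong as the existing ones.
        W1b = np.vstack([W1, rng.normal(size=2)])
        b1b = np.append(b1, rng.normal())
        W2b = np.append(W2, 20.0)
        print([round(float(forward(x, W1b, b1b, W2b, b2)), 2) for x in X])
        # roughly [0.96, 1.0, 1.0, 0.96]: the XOR structure is destroyed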

    Do we see the effects of neuroplasticity in everyday life?

    You see it every time you jump on a bicycle or a skateboard. It’s as if, instead of being born with two legs, you were born with wheels, and the brain has figured out how to operate its new body. Every time you have to learn something, it is thanks to plastic changes in your brain. When you try a new musical instrument or a new skill, like juggling, we can see changes in the physical structure of your brain. You can tell the difference between, for example, a violinist and a pianist with the naked eye at autopsy or with brain imaging. The violinist is using one hand with great precision and just bowing with the other hand, so only one side of their motor cortex – which is the part of the brain that drives the body – grows larger, in a particular spot that controls the fingers. With a pianist, both the right and the left sides grow.

    Does neuroplasticity ever put us at a disadvantage?

    Mother nature is taking a sort of gamble with humans, in that she drops our brains into the world half-baked and lets experience take over and shape them. Our babies have much less well-developed brains than other animals do at birth. All in all, this has been a successful strategy. We’ve taken over every corner of the planet, invented the internet – even gotten off the planet, to the moon. But it means that, to develop properly, children require the right sort of input of language and touch and attention and love. In rare cases when a child has been severely neglected, their brain cannot develop properly.

    You think there is more we could be doing to make plasticity work to our advantage…

    About a decade ago, I got really interested in whether we can create new senses. You have your eyes, ears and nose, but when you look across the animal kingdom, you find animals with detectors that can pick up on things like magnetic fields, electrical fields or ultraviolet light. It just depends what sensors they have. I began to understand our sense organs as “plug and play” detectors. Nature doesn’t have to redesign the brain every time she makes a new detector. Instead, she tinkers with different ways of sensing energy. That opens up the idea of creating new kinds of detectors to plug in.

    What kinds of new detectors do you have in mind?

    My lab began by creating a vest that’s covered with vibratory motors. With that, we could translate any kind of data into patterns of vibration on the skin. We more recently shrunk that into a wristband. We can feed in any kind of data, say infrared or ultraviolet light seen by a robot or a drone – or even stock market data. We can also capture information about the state of your body, like your blood pressure and heart rate.

    [Image: The Buzz wristband creates patterns of vibration on the skin. Credit: Neosensory]

    The wristband is now a product called Buzz that captures sound and turns it into patterns of vibration through four motors. That information on the skin follows the nerves up to your brain, which has no problem learning how to come to an understanding of it. Thousands of deaf people are using it. Every day we get emails from people who say they suddenly realised that they left the water running or they can tell the difference between their two dogs barking.
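
    The details of Neosensory’s processing aren’t given here, but the basic shape of such a mapping can be sketched in a few lines of Python: split each audio frame into frequency bands and drive one motor per band. The band edges and scaling are assumptions for illustration:

        import numpy as np

        def sound_to_motors(samples, rate=16000, n_motors=4):
            """Map one short audio frame to drive levels for four motors."""
            spectrum = np.abs(np.fft.rfft(samples))
            freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
            # Four log-spaced bands between 100 Hz and 8 kHz (assumed edges).
            edges = np.geomspace(100.0, 8000.0, n_motors + 1)
            energy = np.array([
                spectrum[(freqs >= lo) & (freqs < hi)].sum()
                for lo, hi in zip(edges[:-1], edges[1:])
            ])
            peak = energy.max()
            return energy / peak if peak > 0 else energy   # levels in 0..1

        # A 220 Hz tone lands almost entirely on the lowest-band motor.
        t = np.arange(1600) / 16000.0
        print(sound_to_motors(np.sin(2 * np.pi * 220.0 * t)).round(2))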

    Do you think people will be able to understand speech purely through the device?

    Let’s take the wristband. The area of skin on which it can create vibrations is pretty small, but even so it can create 4 billion different patterns. If I just hold on to the wristband and say “One, two, three, four”, you can clearly feel the difference between all these words. So it’s quite high resolution. But the question is: at how high a resolution is your brain reading this information from the skin? People get better and better with time, but what we don’t know yet is what the upper limits are. We haven’t had somebody wear this for a year yet. I can’t wait until we test people who’ve been wearing this for three years. Because of plasticity, their brains will devote more real estate to understanding the information that comes from the device.
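
    The “4 billion” figure is easy to sanity-check. One plausible reading, which is an assumption rather than a published specification, is four motors each driven at one of 256 intensity levels:

        # Four motors x 256 intensity levels (assumed, not a Neosensory spec)
        motors, levels = 4, 256
        print(f"{levels ** motors:,} distinct patterns")   # 4,294,967,296

    That comes to about 4.3 billion, the right order of magnitude for the figure quoted above.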

    What does it feel like to use?

    The first time that you put on Buzz, it just feels like a vibration on your wrist. For example, if you see the dog’s mouth moving and you feel the buzzing on your wrist, you suddenly realise: “Oh, I get it, the dog is barking.” But over the course of a few months, it becomes like hearing. When we talk to participants about this, we say: “Do you feel a buzzing on your wrist and you think, ‘Oh, that must be a dog barking?'” They say: “No, I’m just hearing the dog.”

    [Image: Brain cells are constantly reconfiguring their connections. Credit: Dr Chris Henstridge/Science Photo Library]

    This is exactly how your ears work. When you were an infant, you had to learn how to understand the signals from your inner ears – your brain wasn’t born knowing how to do it. When I’m speaking, you don’t feel like: “There’s some high-frequency sounds, and low frequency and some medium, he must be saying this word.” Instead, you just have the experience of hearing. That’s what happens with Buzz.

    Do you think you could go further with this approach?

    Yes. I live in Silicon Valley and everything here is about hardware and software. But what’s happening in the brain suggests a completely different approach to building technology – call it live-ware. So I’m interested in building systems that aren’t just software but physically reconfigure themselves based on experiences like the brain does. In this way, it would become fast and efficient at the tasks that it does a lot. I feel like we are at the foot of the mountain looking up at it. At the moment, we have no idea how to build this kind of machinery. But I’m excited to see what will happen in the next few decades.
     
    #799
  20. Prince Knut

    Prince Knut GC Thread Terminator

    Joined:
    May 23, 2011
    Messages:
    25,482
    Likes Received:
    12,840
    Is everything predetermined? Why physicists are reviving a taboo idea
    Superdeterminism makes sense of the quantum world by suggesting it is not as random as it seems, but critics say it undermines the whole premise of science. Does the idea deserve its terrible reputation?

    PHYSICS 12 May 2021
    By Michael Brooks

    [Image credit: Pete Reynolds]

    “I’VE never worked on anything so unpopular!” Sabine Hossenfelder, a theoretical physicist at the Frankfurt Institute for Advanced Studies in Germany, laughs as she says it, but she is clearly frustrated.

    The idea she is exploring has to do with the biggest mystery in quantum theory, namely what happens when the fuzzy, undecided quantum realm is distilled into something definite, something we would experience as real. Are the results of this genesis entirely random, as the theory suggests?

    Albert Einstein was in no doubt: God, he argued, doesn’t play dice with the universe. Hossenfelder is inclined to agree. Now, she and a handful of other physicists are stoking controversy by attempting to revive a non-random, “deterministic” idea where effects always have a cause. The strangeness of quantum mechanics, they say, only arises because we have been working with a limited view of the quantum world.

    The stakes are high. Superdeterminism, as this idea is known, wouldn’t only make sense of quantum theory a century after it was conceived. It could also provide the key to uniting quantum theory with relativity to create the final theory of the universe. Hossenfelder and her colleagues aren’t exactly being cheered on from the sidelines, however. Many theorists are adamant that superdeterminism is the most dangerous idea in physics. Take its implications seriously, they argue, and you undermine the whole edifice of science.

    So what is the answer? Does superdeterminism deserve its bad reputation or, in the absence of a better solution, do we have little choice but to give it a chance?

    Quantum theory describes the behaviour of matter at its most basic, the atoms and their constituent particles. It was conceived to make sense of the observed behaviour of atoms, and resulted in physicists claiming that particles behave like waves and can appear to be in several different states at once, known as being in a superposition. The idea is that only when we observe those particles directly do they assume definite properties.

    Erwin Schrödinger came up with an equation to capture the fuzziness of the quantum realm, showing that it could be represented by a mathematical entity later called a wave function. This gives the probability that a quantum object will manifest as a particular state or place upon measurement. But it can’t say for certain.

    In fact, we only get agreement between theory and experiment when we average out the results of lots of measurements of identical quantum objects. That leads to the assumption that each individual outcome occurs at random, and thus that, at the most fundamental level we know, the universe is indeterministic – governed by chance.

    That is hard to swallow for many people because we experience a world in which effects always have a cause. Some have tried to fix the situation by suggesting that the simultaneous different states of an atom in superposition are a reflection of different realities occurring in separate universes. Or by saying that the atoms just don’t have any properties and don’t really exist when they aren’t being observed.

    [Image: Critics have invoked the film Back to the Future to argue against superdeterminism. Credit: Universal Pictures/Photo 12/Alamy]

    But none of these interpretations make sense to Hossenfelder. All of them contain contradictions, she says, which is why she is working to explore superdeterminism.

    Broadly speaking, this is the idea that the outcome of any measurement is due to factors involved in the measurement, such as the measuring apparatus and its settings. What we see is determined by all these factors, including, perhaps, factors that are hidden from us.

    To get to grips with the concept, first you have to understand an idea put forward by physicist John Bell in the 1960s. Bell proposed a scheme for testing whether there was merit to another of Einstein’s reservations about quantum theory, this time about “spooky action at a distance”, as Einstein dubbed it, in which measurements of one particle seem to influence the outcome of measurements of another, spatially distant particle.

    “The resulting theory could have all the consequences of quantum mechanics, but none of the weirdness”

    Bell’s scheme concerned the statistical outcome of a series of measurements on two particles with quantum properties that are “entangled” with each other because of some interaction in the past. He showed that if these non-local correlations aren’t real, then there is a minimum probability with which you should get a particular outcome, such as the same result from both measurements. It can be greater than or equal to this value, but it can’t be lower. In mathematical terms, it is known as an inequality. If you find you are getting the outcome of interest less often than you would expect, “Bell’s inequality” is being violated – and you know the results are being skewed by a non-local correlation between your particles.
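
    The most commonly tested variant of Bell’s result, the CHSH inequality, makes a compact worked example. Any local hidden-variable account keeps the quantity S at or below 2, while quantum mechanics predicts the singlet-state correlation E(a, b) = -cos(a - b), which exceeds the bound at well-chosen angles:

        import math

        def E(a, b):
            """Quantum correlation for a singlet pair measured at angles a, b."""
            return -math.cos(a - b)

        a1, a2 = 0.0, math.pi / 2               # Alice's two settings
        b1, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

        S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
        print(f"S = {S:.3f}; local hidden variables require S <= 2")
        # S = 2.828 = 2*sqrt(2): the bound is violated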

    Forbidden choices
    Experiments have demonstrated that it is possible to violate Bell’s inequality. This seems to prove that the spooky action is real, in spite of Einstein’s objections. But the proof depends on an assumption about the measurement.

    Specifically, Hossenfelder questions the assumption that the experimenter is entirely free to choose the “basis” of any given measurement. Given a collection of gloves, say, you might choose to compare them on the basis of their size, handedness or colour – or a combination of these. But what if certain measurement choices are forbidden by some as-yet-unknown law of physics? What if, for instance, the universe won’t let you measure the colour of a right-hand glove independently of measuring its size? If that were the case, you might get a strange result when you tried to do it. And if you didn’t know about that law, you might conclude that some fundamental weirdness exists in the world of gloves.

    The idea that nature forbids certain choices might seem far-fetched, but quantum theory itself was built on a far-fetched restriction. Max Planck only discovered it through what he called “an act of desperation”: conjecturing that atomic stuff comes in definable lumps of energy, with certain energies forbidden. What’s more, we have since discovered other such restrictions, for instance that two electrons can’t occupy the same quantum state inside an atom – the Pauli exclusion principle. “In that case, no one asks, ‘how is it that I can’t put these electrons in the state that I want to?'” says Hossenfelder. “You just say it’s a law of nature: that’s just how it is.”

    [Image: Superdeterminism suggests that in the quantum realm, some choices are off limits. Credit: Luis Jou Garcia/Getty Images]

    To move the idea along, Hossenfelder is creating a model of reality that forbids certain combinations of quantum states from existing. She is hoping that the resulting idea will have all the consequences of quantum mechanics, but none of the weirdness.

    She isn’t the first to lean in this direction. Physics Nobel laureate Gerard ‘t Hooft at Utrecht University in the Netherlands has proposed something along these lines. And Tim Palmer, a physicist at the University of Oxford, is joining their ranks. Palmer used to work at the European Centre for Medium-Range Weather Forecasts, and his experience with the physics of chaotic systems like weather has led him to formulate a chaos-based superdeterminism idea that he thinks might explain the quantum world.

    In chaos theory, systems such as weather patterns evolve in a way that is extremely sensitive to their initial conditions. Tiny changes in the set-up lead to huge divergences in the later characteristics of the system. At the same time, some chaotic systems will always converge towards a particular set of characteristics. This can be captured in a mathematical entity called a chaotic “attractor”, which maps their movement through all the possible states towards these almost inevitable outcomes. The attractor has a curious, often-ignored property, however. In any chaotic system, there are states and situations that are just off-limits. The attractor’s blank space defines which states are impossible to access.
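
    The textbook toy example of an attractor with forbidden regions is the logistic map. In its period-3 window the long-run dynamics visit just three values, leaving almost the whole of the unit interval off-limits:

        def logistic(x, r=3.83):
            """Logistic map with r in its period-3 window."""
            return r * x * (1 - x)

        x = 0.3
        for _ in range(2000):       # let the transient die away
            x = logistic(x)

        visited = set()
        for _ in range(1000):
            x = logistic(x)
            visited.add(round(x, 6))
        print(sorted(visited))      # three values, near 0.156, 0.505, 0.957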

    Palmer has been investigating what it would mean if our universe were subject to the same constraints. “I imagined that the universe is a chaotic system evolving on its own attractor,” he says. Then he pictured conducting a Bell experiment in this chaotic universe. The experimenters would still make choices about the measurement basis, but Palmer thinks that certain combinations of quantum states could be unattainable, implying that some experimental choices defy the laws of physics. “These gaps in state space, the places the system is not allowed to go, give you the wiggle room you need to be able to violate Bell’s inequality without having to fall back on indeterminacy or non-locality,” says Palmer.

    He plans to put his ideas to the test, starting with an experiment that has to do with the number of quantum objects that can be entangled together. In standard quantum theory, there is no limit, but in Palmer’s scheme that number is finite. “After a certain number of entanglements, you’ll just go back to classical correlations,” he says. In February, with Jonte Hance and John Rarity at the University of Bristol, UK, Palmer published a range of experimental designs that are aimed at finding the limit, if it exists.

    Hossenfelder is even further along. Her design involves performing a set of repeated measurements on a quantum system. “A general prediction of superdeterminism is that the outcome of measurements is actually determined, not random. So you test it by checking whether quantum measurements are really random,” says Hossenfelder.

    In her design, each initial set-up should be an exact copy, as far as is possible, of the one before. If the universe is ultimately deterministic, the results should be more or less identical. But if probabilistic quantum mechanics really is fundamental, there would be easily discernible changes in the outcome of each run.

    That is harder than it sounds: you have to make sure you don’t introduce randomness through the measurement device, so the smaller it is and the lower the temperature it operates at, the better. “You also have to do the measurements in as fast a sequence as possible because the more time passes, the more likely it is that something wiggles,” says Hossenfelder.
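
    In cartoon form, and as a toy construction rather than Hossenfelder’s actual protocol, the logic of the test looks like this: standard quantum mechanics makes nominally identical runs statistically independent, whereas a deterministic account makes truly identical set-ups repeat themselves:

        import random

        def run(deterministic, hidden_state=1, p=0.5, n=20):
            """Outcomes of n nominally identical measurements."""
            if deterministic:
                return [hidden_state] * n   # same set-up, same outcome
            return [int(random.random() < p) for _ in range(n)]

        print("quantum-style:   ", run(deterministic=False))
        print("superdeterminist:", run(deterministic=True))
        # The tell-tale statistic is run-to-run variability: zero if
        # determinism holds, binomial if quantum randomness is fundamental.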

    A vast conspiracy?
    The good news is that Siddharth Ghosh at the University of Cambridge has just the sort of set-up that Hossenfelder needs. Ghosh operates nano-sensors that can detect the presence of electrically charged particles and capture information about how similar they are to each other, or whether their captured properties vary at random. He plans to start setting up the experiment in the coming months.

    All of which sounds exciting. And yet a large number of philosophers and physicists scoff at the prospect, insisting that we don’t need to do experiments to rule out superdeterminism. Howard Wiseman at Griffith University in Queensland, Australia, pretty much sums up the objections when he says the idea would imply there is a “fine-tuning” conspiracy behind the laws of physics, meaning you have to input initial conditions by hand and choose them very precisely to make sense of observations. And that’s just for starters. Wiseman adds that superdeterminism would also undermine the notion of human free will and make the whole idea of doing science pointless. “I’m not a fan,” he says.

    This last point comes from the fact that superdeterminism violates something known to philosophers of science as “statistical independence”. This is the idea that tweaking the input to an experiment shouldn’t change anything in the equipment set-up to detect the output. The free will problem arises because, in the Bell test, the experimenter has to be able to set the experiment up however they want to. Superdeterminism says this isn’t possible because there are hidden constraints.

    But Wiseman thinks the fine-tuning argument is the most persuasive. He points out that experimenters have made deliberately ridiculous, random choices when performing Bell experiments – choosing sequences of binary digits from a digitised version of the 1985 movie Back to the Future to create the measurement settings, for example. Since superdeterminism explains Bell experiments through correlations between hidden variables in the quantum particles and the measurement settings, that would imply that the series of events and choices in the film’s production are correlated with the actions of the physicists carrying out the experiment.

    [Image: If quantum randomness is an illusion, we may be able to make more robust quantum computers. Credit: Xinhua/Alamy]

    “According to superdeterminism, the explanation for these particular Bell correlations is that it is impossible to have a universe in which the hidden variables in the experimental photons are the same, but Michael J. Fox’s smile was 1 millimetre wider in one frame of Back to the Future,” says Wiseman. “It’s the most bizarrely fine-tuned theory imaginable. The whole thing is a vast conspiracy: an insanely complicated universe with unimaginably fine-tuned initial conditions.”

    Palmer rejects this argument. “It is most certainly not the case that superdeterminism says that there is any conspiracy,” he says. The problem, he says, is that people see fine-tuning because they are applying the wrong kind of mathematics. Physicists assume that the fundamentals of the universe evolve in a linear fashion, but they might well be chaotic, says Palmer. In which case, the linear equations we have won’t work.

    In any case, the choice of Back to the Future was itself curious. That is because, according to Huw Price, a philosopher at the University of Cambridge, backwards-in-time causation is the one thing that can make superdeterminism plausible. Price argues that superdeterminism works if you take the “block universe” perspective of Einstein’s special theory of relativity, where past, present and future all co-exist in a big four-dimensional grid we call space-time.

    In this scheme, time doesn’t run in any one direction. Time’s arrow isn’t fundamental to relativity – or to quantum theory, for that matter. And that means tweaking a detector’s settings to determine the properties of two entangled particles in a particular way can influence the properties those particles gained at what we would call an earlier time. “The setting that we chose has an influence on the particle all the way back to when it was first formed in some event that produced two particles,” says Price.

    “If there is backwards-in-time causation, science and human free will can be rescued”

    He believes this perspective has two advantages. First, it doesn’t require that the universe was set up in some precise, finely tuned way that gives the experiment its outcome. Second, it doesn’t undermine science and human free will. Emily Adlam, a philosopher of physics affiliated with the independent Basic Research Community for Physics, says she is willing to consider this idea. “An approach where the laws of nature apply all at once to the whole of history gives much less reason for concern.”

    Wiseman remains unconvinced. It is all just too vague, he says. People have been suggesting that some kind of “retrocausality” might resolve our problems with quantum theory for nearly 100 years, he points out, but no one has ever embedded an idea that works this way into Einstein’s space-time.

    And what would be the point? “I don’t see the motivation for wanting to deny the conclusion of Bell’s theorem,” says James Ladyman, a philosopher at the University of Bristol, UK. It isn’t the end of the world to accept the existence of a non-local phenomenon like entanglement, even if it does go against our metaphysical presuppositions.

    The proponents of superdeterminism think there is a lot to be gained, however. For a start, it might open the door to new kinds of technology, says Hossenfelder. The limits to how well we can make measurements come from the noise that has its roots in quantum randomness. If the apparent randomness is actually the result of controllable processes, perhaps we could push down the quantum noise. “I think this would be important for quantum computing – the major issue that they have is the noise,” says Hossenfelder.

    For Palmer, the benefits might lie in the quest to unite quantum theory with relativity to create a theory of quantum gravity. The usual approach is to modify relativity and leave the quantum stuff alone, but Palmer thinks that superdeterminism suggests this might be a mistake. “It’s going to require a lot more give from the quantum side than from the relativity side,” he says.

    There is no agreement in sight, and progress is slow because there are so few people working on superdeterminism. Nonetheless, their proposal for answering the riddle of quantum theory is worth exploring, says Adlam. “I wouldn’t say superdeterminism is my preferred route, but I think it’s definitely reasonable, and it deserves a lot more attention than it has received.”

     
    #800
