Rarefied Technologies: Harvard Startup Aims to Conquer the Ignorosphere

Thursday, January 16, 2025 / No Comments

 

Rarefied Co-Founder Ben Schafer pitching the technology at Greentown Labs. Photo credit: Rarefied

A new aerospace startup from Harvard is setting its sights on a previously untouched layer of Earth's atmosphere. Rarefied Technologies, founded by Angela Feldhaus and Benjamin Schafer, aims to explore the mesosphere, a region between 50 and 100 kilometers above Earth that has long been inaccessible to both airplanes and satellites.

The startup, launched this fall with support from the Harvard Grid Accelerator, is developing groundbreaking devices capable of levitating in this extreme environment to gather climate data and enable telecommunications networks. This innovation could lead to breakthroughs in weather prediction, disaster preparedness, and even defense.

“The mesosphere is often called ‘the ignorosphere’ because it’s too high for planes and too low for satellites,” said Schafer, who recently earned his PhD in Applied Physics at Harvard. “Rarefied is creating devices that can finally access this region.”

Innovative Technology in Uncharted Territory

Rarefied’s devices are engineered to operate in the mesosphere’s rarefied gas and low-pressure conditions—where atmospheric pressure is 5 million times lower than at sea level. Roughly the size of a grain of rice, these lightweight, ceramic-based structures are among the strongest ever created. Powered by sunlight, they can levitate indefinitely while carrying small loads like GPS systems.

The devices represent a technological leap, with potential applications in industries ranging from agriculture and insurance to defense and telecommunications. They are poised to provide valuable insights into how climate change affects the mesosphere, a region that is contracting and cooling due to human activity.

“Understanding this part of the atmosphere could transform our ability to predict weather and track natural disasters,” Schafer added.

From Harvard Lab to Commercial Launch

The startup’s journey began in the Joost Vlassak lab at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS). Schafer and Feldhaus developed the technology with funding from the Harvard Grid Accelerator, which bridges the gap between academic research and commercial viability.

Chris Petty, Director of Business Development for Physical Sciences at Harvard’s Office of Technology Development (OTD), highlighted the startup's potential. “Rarefied has articulated a clear market need while advancing the boundaries of science,” he said.

A Bright Future for Rarefied

Rarefied’s work has earned accolades, including Schafer’s inclusion in the 2025 Forbes 30 Under 30 list and fellowships with Breakthrough Energy and Los Alamos National Laboratory. The startup plans to conduct field tests within the next few years, with an eye toward scaling up its technology for larger applications.

“Not only are we creating something commercially viable, but we’re also pushing the boundaries of science,” Schafer noted. “This is just the beginning of understanding what’s possible in this unexplored field.”

About Rarefied Technologies

Rarefied Technologies focuses on developing ultra-lightweight, solar-powered devices to explore the mesosphere. The startup’s innovations promise to revolutionize climate research, telecommunications, and beyond.

Quinone-Based Carbon Capture: Safe, Sustainable CO2 Removal

Monday, January 13, 2025 / No Comments

 

Quinone-mediated electrochemical carbon capture experimental setup.

The research, led by Kiana Amini, a former Harvard postdoctoral fellow now an assistant professor at the University of British Columbia, provides critical insights into the detailed chemistry of quinone-mediated carbon capture. The study showcases how these electrochemical systems work, utilizing the interplay between two types of electrochemical reactions, direct capture and indirect capture, to maximize CO2 removal.

Quinones, due to their abundant availability and versatility, have the potential to repeatedly bind and release CO2, making them ideal candidates for sustainable carbon capture technologies. Through advanced lab experiments, the team discovered that quinones not only directly interact with CO2 but also create conditions that allow CO2 to convert into stable compounds, significantly enhancing capture efficiency.

The study introduces two novel experimental techniques to measure the contributions of each mechanism in real-time. By using reference electrodes, researchers can observe voltage signature differences between quinones and their CO2 adducts, while fluorescence microscopy allows them to distinguish between various chemical states and quantify concentrations with high precision.

These findings pave the way for designing customized carbon capture systems that can be fine-tuned to meet specific industrial needs, from large-scale industrial applications to localized environmental solutions. Although challenges like oxygen sensitivity remain, this research provides valuable tools for improving system performance and scalability.

Supported by the National Science Foundation and the U.S. Department of Energy, this work highlights the potential of quinone-based carbon capture to revolutionize greenhouse gas removal technologies, offering a safe, cost-effective, and sustainable approach to combating climate change.

Tesla Has Benefited from Nearly £200m in UK Grants Since 2016

Wednesday, January 8, 2025 / No Comments

 

The Department for Transport grants for Tesla peaked at £61.6m in 2020 and have since declined. Photograph: William Barton/Alamy
Elon Musk's electric vehicle giant, Tesla, has received close to £200 million in grants from the UK government over the past nine years, according to a recent analysis. Data from Tussell, which monitors public contracts, reveals that Tesla has been awarded £191 million, with the bulk of the funding coming from the Department for Transport (DfT).

The grants, amounting to £188 million, were largely provided through the government’s plug-in car scheme. This initiative, launched in 2011, aimed to promote the adoption of electric and hybrid vehicles by offering discounts of up to £5,000 on new purchases. The scheme concluded in June 2022. During its peak in 2020, Tesla received £61.6 million from the DfT, but funding has steadily declined since then, with just £49,000 granted in the first half of last year.

Additional smaller grants came from various organizations, including Stirling Council, the South Central NHS Trust, and the Scottish Government.

Tesla's reliance on government subsidies contrasts with Elon Musk’s frequent criticism of government spending. Notably, Musk was appointed by then-US President-elect Donald Trump to co-lead a “Department of Government Efficiency,” aimed at reducing the size of federal agencies. Musk, who drastically cut staffing levels at X (formerly Twitter) after acquiring the platform, has argued for reducing the 428 US federal agencies to just 99.

Meanwhile, Tesla has faced challenges with its product lineup. Last week, the company reported its first annual decline in vehicle deliveries, struggling to meet demand despite incentives. Several quarterly delivery targets were missed in 2024, highlighting growing pressure on the company.

Musk, one of the world’s richest individuals, has also come under scrutiny for his recent inflammatory comments. Using X, he has attacked political figures, including UK Labour leader Keir Starmer, over a grooming scandal. On Monday, the UK Prime Minister criticized Musk for spreading misinformation, accusing him of amplifying far-right rhetoric.

Tesla has yet to comment on these developments.

This funding revelation comes amidst broader concerns about Musk’s business practices and public behavior, as Tesla navigates declining demand and political controversy.

NHS to Launch World-First AI Tool to Predict Type 2 Diabetes Risk

Thursday, December 26, 2024 / No Comments

 

Computer monitors in the operating theatre.

The NHS in England is set to launch a groundbreaking trial of an artificial intelligence (AI) tool designed to predict the risk of type 2 diabetes up to 13 years before it develops. The innovative technology, known as Aire-DM, analyzes electrocardiogram (ECG) readings during routine heart scans to identify subtle changes that signal an increased risk of diabetes. These patterns are often too small for the human eye to detect but reflect early impacts of diabetes on the heart.
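The announcement does not describe Aire-DM's internal architecture, so the following is only a minimal sketch of the general idea of scoring diabetes risk from an ECG trace with a small 1D convolutional network; every layer size, name, and signal length here is an assumption for illustration, not the NHS tool itself.

import torch
import torch.nn as nn

# Illustrative model only (not Aire-DM): a small 1D CNN mapping a 12-lead ECG
# trace to a single risk score between 0 and 1.
class ECGRiskModel(nn.Module):
    def __init__(self, n_leads: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, ecg: torch.Tensor) -> torch.Tensor:
        # ecg has shape (batch, n_leads, n_samples)
        z = self.features(ecg).squeeze(-1)
        return torch.sigmoid(self.head(z))   # risk score in [0, 1]

model = ECGRiskModel()
fake_ecg = torch.randn(2, 12, 5000)          # two synthetic 10-second traces at 500 Hz
print(model(fake_ecg))                        # two untrained risk scores near 0.5

In the real system, a score like this would be combined with clinical and genetic information, which the NHS says further improves the tool's precision.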

The trial, starting in 2025 at two London hospital trusts—Imperial College Healthcare NHS Trust and Chelsea and Westminster Hospital NHS Foundation Trust—marks a world-first in healthcare. Aire-DM, developed using data from 1.2 million ECGs and validated through the UK Biobank, has shown an accuracy rate of approximately 70% in predicting diabetes risk. When combined with clinical and genetic information, the tool's precision improves further.

Type 2 diabetes, a condition affecting over 500 million people worldwide, is linked to severe health complications such as heart disease, kidney failure, and blindness. Early detection through AI could enable timely interventions, including lifestyle changes, to prevent or delay the onset of the disease. If successful, Aire-DM could be rolled out across England's NHS and potentially globally, revolutionizing preventive healthcare.

Jaguar Type 00 Design Vision Concept Unveiled

Tuesday, December 3, 2024 / No Comments

 

Jaguar Type 00 Design

Jaguar has unveiled its Type 00 “Design Vision” concept, signaling a bold reinvention of the brand. This concept car introduces a new era of ultra-luxurious, all-electric vehicles and embodies Jaguar’s refreshed design philosophy, termed “Exuberant Modernism.” It features striking proportions, angular styling, and distinctive design elements like the omission of a traditional rear window, a feature reminiscent of the Polestar 4. The rear design includes horizontal slats that might conceal taillights, emphasizing futuristic aesthetics.

The concept serves as a precursor to three production EVs planned for launch by 2030, targeting a more exclusive, high-end market akin to Bentley’s territory. This shift is part of a broader rebranding effort that also introduces a minimalist new logo and graphic identity aimed at reconnecting with Jaguar’s legacy while appealing to contemporary luxury consumers.

The Type 00 and its upcoming production counterparts reflect Jaguar’s aspiration to regain relevance in the competitive luxury EV market, leveraging bold design and cutting-edge technology. A full unveiling of the concept is tied to Miami Art Week, highlighting the artistic approach to the brand’s relaunch.

COF-999: Transforming Carbon Capture Technology

Sunday, December 1, 2024 / No Comments

 

Carbon-capturing powder

Scientists at UC Berkeley have developed COF-999, a groundbreaking covalent organic framework that could significantly advance carbon-capture technologies. This highly porous powder is engineered to selectively bind carbon dioxide (CO₂) molecules from the air using amine-functionalized pores, achieving remarkable efficiency at room temperature. Unlike many existing materials, COF-999 does not require intensive heating to function; it releases captured CO₂ at just 60°C (140°F), which translates to substantial energy savings. Furthermore, the powder can be reused at least 100 times without losing its effectiveness, making it a durable and sustainable solution.

The material captures CO₂ from the air up to 10 times faster than current methods, addressing a critical need for scalable carbon-removal solutions. The potential applications include reducing atmospheric CO₂ levels directly or integrating into industrial processes such as cement and plastic production. Moreover, the simplicity of its composition suggests a pathway to lower production costs, an essential factor for achieving global scalability.

If successfully commercialized, COF-999 could transform the economics of direct air capture (DAC), helping lower costs from the current $600–$1,000 per ton to the sub-$200 range necessary for widespread adoption. While further testing and optimization are needed, the innovation represents a significant step toward addressing climate change by enhancing the efficiency and feasibility of capturing and storing greenhouse gases.

Injectable Gel: A New Frontier in Cancer Immunotherapy

Friday, November 29, 2024 / No Comments

CT imaging
Researchers from MIT and Massachusetts General Hospital (MGH) have developed a novel platform that could revolutionize cancer immunotherapy by delivering treatments directly into tumors in a more efficient, targeted, and lasting way. This innovation involves a thermosensitive gel made from safe, biocompatible polymers that solidify inside tumors, enabling a controlled release of FDA-approved immunotherapies like imiquimod over several days.

 Tested in mouse models for colon and breast cancer, the treatment achieved significant results when combined with checkpoint blockade therapy, showing potential for inducing the abscopal effect—where both treated and untreated tumors regress.

The gel’s design incorporates imaging agents for accurate delivery using CT or ultrasound, addressing a critical challenge in intratumoral immunotherapy. This approach reduces the need for repeated injections, cutting costs and enhancing feasibility in clinical settings. Future developments may adapt this platform to treat other tumor types or carry additional therapies, accelerating its path to FDA approval due to its use of existing drugs and materials.

New AI tool generates realistic satellite images of future flooding

Thursday, November 28, 2024 / No Comments

 

Satellite image

Researchers at MIT have developed a cutting-edge AI tool capable of generating highly realistic satellite images that simulate the impact of future flooding events. By combining generative AI with a physics-based flood model, the system creates satellite-like imagery depicting how specific areas might look after a storm or hurricane. This innovation was tested using data from Hurricane Harvey in Houston, producing accurate representations of flood extents.
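The summary does not give the model details, so the sketch below only illustrates the conditioning idea described above: a toy convolutional generator that takes a pre-storm satellite tile plus a one-channel inundation mask produced by a physics-based flood model and outputs a plausible post-storm tile. All layer sizes and names are assumptions, not MIT's architecture.

import torch
import torch.nn as nn

class FloodConditionedGenerator(nn.Module):
    # Toy conditional generator: pre-storm RGB tile (3 channels) + physics-based
    # flood mask (1 channel) are concatenated and mapped to a post-storm RGB tile.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pre_storm_rgb: torch.Tensor, flood_mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([pre_storm_rgb, flood_mask], dim=1)   # condition on the physics output
        return self.net(x)

gen = FloodConditionedGenerator()
pre = torch.rand(1, 3, 256, 256)            # pre-storm satellite tile
mask = torch.rand(1, 1, 256, 256).round()   # binary inundation map from the flood model
post = gen(pre, mask)                        # generated post-storm imagery, (1, 3, 256, 256)

Constraining the generator with the flood model's mask is what keeps the imagery physically plausible and limits the "hallucinated" flooding discussed below.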

The tool aims to improve disaster preparedness by offering a more tangible and emotionally engaging visualization compared to traditional color-coded flood maps. These satellite-like images could encourage timely evacuation and better resource allocation by showing hyper-localized flooding scenarios, making the risks more relatable and actionable for residents and policymakers.

To ensure accuracy and trustworthiness, the model integrates real-world physical parameters such as storm trajectories and flood patterns, minimizing "hallucinations" (errors where the AI might generate unrealistic flood zones). While promising, the system is still in development and requires training with additional data to adapt to various regions.

This AI tool could be a game-changer for disaster management, aiding communities in visualizing and preparing for climate impacts effectively. It’s part of the broader “Earth Intelligence Engine” project by MIT, aimed at making scientific climate data more accessible and practical.

Scientists recreate mouse from gene older than animal life

Tuesday, November 19, 2024 / No Comments

Scientists have achieved an extraordinary breakthrough by creating a mouse using a gene from choanoflagellates, unicellular organisms that share a common ancestor with animals. This research, conducted by Queen Mary University of London and The University of Hong Kong, demonstrates how ancient genetic tools from single-celled organisms can be utilized to understand stem cell evolution and animal development.

The key innovation lies in the Sox gene, known for driving pluripotency—the ability of cells to develop into any type of tissue. Researchers extracted this gene from choanoflagellates and introduced it into mouse stem cells. These modified cells were then used to produce a living mouse, showcasing how genes pivotal to stem cell function existed even before multicellular life evolved.

Choanoflagellates are the closest known relatives of animals, making them crucial for understanding the evolutionary leap from single-celled organisms to complex life forms. The study underscores the deep evolutionary connections shared by all life on Earth and highlights the potential of ancient genetic tools in modern science.

This research not only expands our understanding of stem cell origins but also opens new doors for advancements in biotechnology and regenerative medicine.

MIT engineers make converting CO2 into useful products more practical

Monday, November 18, 2024 / No Comments

 

MIT engineers have developed a scalable and efficient method to convert carbon dioxide (CO2) into useful products like ethylene, a key ingredient in plastics. This innovation addresses challenges in the electrochemical process that transforms CO2, such as balancing the material's conductivity and water-repelling properties. The team used a Teflon-like plastic (PTFE) enhanced with woven copper wires to create a gas diffusion electrode that combines excellent conductivity with hydrophobicity.

This design improves efficiency and scalability, enabling the production of larger electrodes needed for industrial applications. By dividing the material into smaller subsections through the copper wires, the system mimics the high performance of smaller electrodes. The approach also allows integration with existing manufacturing processes, paving the way for scaling up CO2 conversion technology to address global emissions effectively.

This breakthrough offers a significant step toward sustainable solutions for utilizing excess CO2 while producing valuable industrial materials. The research was supported by Shell and the MIT Energy Initiative and conducted using MIT.nano facilities.

Scientists Create Photonic Time Crystals That Amplify Light Exponentially

Saturday, November 16, 2024 / No Comments

 

Scientists have achieved a groundbreaking advancement by creating photonic time crystals that amplify light exponentially. These materials, which possess unique time-varying properties, could revolutionize technologies related to light amplification, such as lasers, sensors, and optical computing. Unlike conventional crystals, which have spatial patterns, photonic time crystals operate by modulating their properties over time, allowing for precise control of light-matter interactions.

The key innovation here is the use of tiny silicon spheres to create resonant conditions, allowing these time crystals to work with existing optical materials and techniques. The crystals can exponentially amplify light signals, enabling applications that require ultra-sensitive detection, such as medical diagnostics, including the detection of viruses, cancer biomarkers, and other diseases. This is possible because these time crystals enhance the light emitted by small particles, improving the sensitivity of current technologies.

One of the most exciting aspects of this development is its potential to bring photonic time crystals from microwave frequencies to the visible light spectrum. This breakthrough could have profound implications for various fields, including communications, imaging, and scientific research, where precise control of light is essential. The research represents a significant leap forward in the realm of photonics, and scientists are optimistic about its future applications in advanced technology and medicine.

Startup Lumicell Revolutionizes Breast Cancer Surgery with Real-Time Tissue Imaging

Monday, November 11, 2024 / No Comments

 

A new technology developed by the startup Lumicell, an MIT spinout, is providing surgeons with a real-time, in-depth view of breast cancer tissue during surgery, enhancing the precision and effectiveness of breast cancer procedures.

 By using a handheld scanner in combination with an optical imaging agent, the device allows surgeons to immediately visualize residual cancer cells in the surgical cavity, ensuring more complete tumor removal. This innovation helps minimize the likelihood of leaving behind cancerous tissue, which could otherwise lead to follow-up surgeries.

The technology integrates advanced imaging techniques with AI algorithms, enabling surgeons to assess tumor margins in real-time, as opposed to the current standard where pathology results take days. With this immediate feedback, surgeons can make more informed decisions during the operation, potentially reducing recurrence rates and improving patient outcomes.

If widely adopted, Lumicell's approach could transform the standard of care by making surgeries more targeted, reducing the need for repeat procedures, and improving recovery times. The FDA's recent approval of Lumicell’s technology marks a significant step forward in personalized and precise cancer care.

How AI is Helping California Prevent Wildfires Before They Start

Tuesday, November 5, 2024 / No Comments


 California has adopted advanced AI technology to identify potential wildfires before they ignite, aiming to combat the increasing threat posed by these natural disasters. The state is leveraging a network of cameras, satellites, and sensors combined with machine learning algorithms to monitor vast, fire-prone areas for early signs of trouble.

These AI systems analyze real-time data feeds from thousands of cameras positioned throughout the region and use pattern recognition to detect subtle changes in the environment, such as smoke or other early indicators of fire. When suspicious activity is spotted, the AI can alert fire response teams almost instantly, enabling them to take preventive measures before a small spark escalates into a large-scale wildfire.
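As a rough illustration of the monitoring loop described above, and not the actual system used by California or its camera networks, here is a minimal sketch; the smoke_score classifier, camera identifiers, and alert threshold are all hypothetical placeholders.

import random

ALERT_THRESHOLD = 0.85                       # assumed confidence threshold

def smoke_score(frame) -> float:
    # Placeholder for a trained smoke/fire image classifier applied to one camera frame.
    return random.random()

def check_cameras(latest_frames: dict, notify) -> None:
    # latest_frames maps a camera id to its most recent frame; notify routes alerts
    # to fire-response teams, who verify before crews are dispatched.
    for cam_id, frame in latest_frames.items():
        score = smoke_score(frame)
        if score >= ALERT_THRESHOLD:
            notify(cam_id, score)

check_cameras({"ridge-cam-07": None, "valley-cam-12": None},
              notify=lambda cam, s: print(f"Possible smoke on {cam} (confidence {s:.2f})"))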

AI also plays a role in predictive modeling, using historical data, weather patterns, and vegetation analysis to forecast where wildfires are most likely to occur. This helps in preemptively directing resources, such as clearing brush or positioning firefighting crews strategically, to areas at high risk.


The use of AI in wildfire detection offers significant benefits, including faster response times and more efficient allocation of firefighting resources. However, it also comes with challenges, such as ensuring the accuracy of AI predictions and managing the vast amounts of data collected.


Overall, California’s deployment of AI technology is part of a broader initiative to mitigate the devastating impact of wildfires and safeguard communities from the increasing frequency and severity of these events.

Wearable Devices for Neurons: Probing Brain Function and Restoration

Saturday, November 2, 2024 / No Comments


 MIT Scientists have developed innovative "wearable" devices that can wrap around neurons, offering new possibilities for probing and interacting with subcellular regions of the brain. These microscopic devices are designed to conform tightly to individual neurons, allowing for high-precision measurements and interactions at the cellular level. The concept is similar to wearable technology for humans but scaled down to interact directly with cells.

The primary applications of these neuronal "wearables" include detailed mapping of electrical and chemical signals in subcellular areas, which could provide deeper insights into how the brain functions at the most intricate levels. By accessing and monitoring these tiny regions, researchers can better understand processes like signal transmission and synaptic activity. This could lead to breakthroughs in understanding neurological diseases and disorders.

Moreover, there is potential for these devices to be used in therapeutic applications. For example, they could be engineered to deliver electrical stimulation or drugs directly to specific parts of the brain, possibly aiding in the restoration of lost brain functions or modifying neuronal activity to address disorders such as epilepsy or Parkinson's disease.


This new approach marks a significant step in neurotechnology, merging micro-engineering and neuroscience to create tools that are more integrated with biological structures than ever before.

Elon Musk Predicts 10 Billion Humanoid Robots by 2040 Priced at $20K-$25K Each

Wednesday, October 30, 2024 / No Comments

 

Tesla CEO Elon Musk predicted on Tuesday at the Future Investment Initiative in Saudi Arabia that humanoid robots could outnumber humans by 2040, with roughly 10 billion robots in use globally. Musk said advances in robotics could bring the cost of a “robot that can do anything” down to between $20,000 and $25,000, a figure that aligns closely with Tesla’s Optimus robot pricing, which he anticipates could reach $20,000 to $30,000 in the long term with mass production.

The Tesla Optimus project began in 2021 and, despite a rocky start with a human in a robot costume, has shown incremental progress. At Tesla's recent “We, Robot” event, Optimus units performed tasks such as handing out drinks and interacting with guests, though some actions were teleoperated to enhance performance. Tesla’s Optimus lead Milan Kovac confirmed that about 20 robots were active during the event, with minor incidents, including a robot fall.

Currently, two Optimus robots work on the factory floor, though Tesla has not specified their roles. Musk projected limited production to begin next year, targeting thousands of robots in Tesla facilities by 2025 and mass production by 2026, ultimately aiming for Optimus to be Tesla’s largest product line and potentially pushing Tesla's valuation to $25 trillion.

Tesla faces competition from companies like Figure AI, Apptronik, Toyota Research Institute, and Boston Dynamics, which are also investing heavily in humanoid robot technology.

Elon Musk Unveils Tesla Cybercab: A Fully Autonomous Robotaxi

Tuesday, October 29, 2024 / No Comments

 

Elon Musk Reveals Tesla Cybercab Robotaxi, Promises Sub-$30,000 Autonomous Car by 2027 and a 20-Passenger 'Robovan'

Tesla CEO Elon Musk has introduced the Cybercab, the company’s highly anticipated robotaxi, setting its price at under $30,000. Musk also announced Tesla's intention to launch autonomous driving capabilities for its Model 3 and Model Y vehicles in California and Texas by next year.

The unveiling took place at the We, Robot event at Warner Bros. Studios in Burbank, California. Musk arrived in the Cybercab, donning his signature black leather jacket and accompanied by a man dressed as an astronaut. Human-like robots entertained the crowd, dancing and serving drinks to attendees, adding a futuristic touch to the celebration.

Prior to Tesla’s announcement, many analysts remained skeptical about the company’s ability to deliver on its long-standing promise of fully self-driving vehicles. Tesla’s robotaxi vision has been in the pipeline for nearly five years, with autonomous driving features teased for almost a decade.

At the We, Robot event, Musk revealed that 20 additional Cybercabs were present, along with 50 fully autonomous vehicles available for test drives across the 20-acre venue. He highlighted the Cybercab’s revolutionary design, featuring neither a steering wheel nor pedals and utilizing inductive charging instead of a plug.

Musk also noted that Tesla had “overspecced” the computer in each vehicle, employing an Amazon Web Services-like approach that allows computational power to be distributed across its vehicle network, enhancing efficiency and functionality.



Musk announced that Tesla expects the Cybercab to cost under $30,000 (approximately £22,980 or A$44,500). He projected the robotaxi to be in production "in 2026" before pausing and amending his estimate to “before 2027,” acknowledging his tendency toward optimistic timelines.

Envisioning a future transformed by autonomous vehicles, Musk described a world where parking lots could be repurposed as parks, and passengers could relax, sleep, or watch movies in a “comfortable little lounge” during their trips. He noted that Cybercabs could serve as Uber-like taxis when not in use by their owners and even suggested that people could operate fleets of these vehicles, creating ride-share networks akin to a “shepherd with a flock of cars.”

“It’s going to be a glorious future,” he declared.

Tesla’s Model 3 and Model Y vehicles are set to transition from supervised to fully unsupervised self-driving, starting in California and Texas next year, with expansion planned across the U.S. and globally as regulatory approvals permit. While the S and X models will also gain autonomous capabilities, Musk did not specify a timeline for these.

“With autonomy, you get your time back. It’ll save lives, a lot of lives, and prevent injuries,” Musk emphasized, citing Tesla’s extensive driving data collected from millions of vehicles as a key factor in making autonomous driving safer than human drivers.

“With that amount of training data, it’s obviously going to be much better than a human can be because you can’t live a million lives,” Musk stated. “It doesn’t get tired, and it doesn’t text. It’ll be 10, 20, even 30 times safer than a human.”


Musk also unveiled the “Robovan,” an autonomous van designed to carry up to 20 passengers and cargo, though he did not disclose pricing or a production timeline. In addition, he highlighted significant progress on Tesla’s humanoid robot, Optimus. As the robots moved among attendees to serve drinks, Musk urged, “Please be nice to the Optimus robots.” At the end of the event, several robots danced on a neon-lit stage to Daft Punk's Robot Rock, with Musk estimating a future production cost of around $30,000 per robot.

The event showcased Tesla’s autonomous innovations amid ongoing challenges. The company currently faces a class-action lawsuit in the U.S. from Tesla owners who had been promised full self-driving capabilities that remain undelivered. Following pressure from U.S. safety regulators in February last year, Tesla issued a recall to address software allowing speeding and other violations in its full self-driving mode. In April, regulators launched an investigation into whether Tesla’s full self-driving and autopilot systems were sufficiently ensuring that drivers remained attentive, prompted by reports of 20 crashes involving autopilot since the initial recall.


Groundbreaking Achievement: High-Performance Computing Analyzes Quantum Photonics on a Large Scale for the First Time

Monday, October 28, 2024 / No Comments

Scientists at Paderborn University have, for the first time, used high-performance computing, in the form of their Noctua supercomputer, to conduct a large-scale analysis of a quantum photonics experiment.

Researchers at Paderborn University in Germany have developed high-performance computing (HPC) software capable of analyzing and describing the quantum states of a photonic quantum detector.

HPC utilizes advanced classical computers to handle large datasets, conduct complex calculations, and swiftly tackle challenging problems. However, many classical computational methods cannot be directly applied to quantum applications. This new study indicates that HPC may offer valuable tools for quantum tomography, the technique employed to ascertain the quantum state of a quantum system.

In their study, the researchers state, “By developing customized open-source algorithms using high-performance computing, we have performed quantum tomography on a photonic quantum detector at a mega-scale.”

HPC enables mega-scale quantum tomography

A quantum photonic detector is a sophisticated instrument designed to detect and measure individual light particles (photons). Highly sensitive, it can collect detailed information about various properties of photons, including their energy levels and polarization. This data is invaluable for quantum research, experiments, and technologies.

Accurately determining the quantum state of the photonic detector is crucial for achieving precise measurements. However, the process of performing quantum tomography on such an advanced tool requires handling large volumes of data.

This is where the newly developed HPC software comes into play. To showcase its capabilities, the researchers stated, “We performed quantum tomography on a megascale quantum photonic detector covering a Hilbert space of 10^6.”

Hilbert space is a mathematical concept that describes a multi-dimensional space where each point represents a possible state of a quantum system. It includes an inner product for calculating distances and angles between states, which is essential for understanding concepts such as probability and superposition. These spaces can possess infinite dimensions, representing a wide array of potential states.

With the HPC software, the researchers successfully “completed calculations that described the quantum photonic detector within a few minutes—faster than anyone else before,” they added.
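As a toy illustration of what detector tomography involves (and not the Paderborn team's actual algorithm), the sketch below probes a simulated click/no-click detector with coherent states and reconstructs its response by least squares in a 15-dimensional truncated photon-number space. The group's software solves the same kind of inverse problem for a Hilbert space of 10^6, where noisy data and sheer problem size demand regularization and high-performance computing.

import numpy as np
from scipy.stats import poisson

N = 15                                    # photon-number cutoff: a 15-dimensional Hilbert space
amps = np.linspace(0.1, 2.5, 40)          # amplitudes of the coherent probe states
eta = 0.6                                 # "true" detector efficiency, used only to simulate data

# F[i, n] = probability that probe state i contains n photons (Poisson statistics, mean |alpha|^2)
F = np.array([poisson.pmf(np.arange(N), a ** 2) for a in amps])

# A click/no-click detector responds with P(click | n photons) = 1 - (1 - eta)^n
theta_true = 1.0 - (1.0 - eta) ** np.arange(N)
p_click = F @ theta_true                  # ideal, noise-free click probabilities

# Reconstruct the detector response from the data by least squares, then clip to [0, 1]
theta_est, *_ = np.linalg.lstsq(F, p_click, rcond=None)
theta_est = np.clip(theta_est, 0.0, 1.0)
print(np.max(np.abs(theta_est - theta_true)))   # small residual on this noise-free toy problem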

Classical Computing Breakthroughs Spark New Advances in Quantum Technology

HPC is not just limited to determining the state of the quantum photonic detector. By leveraging the inherent structure of quantum tomography, the researchers were able to enhance the efficiency of the process.

This optimization enables them to manage and reconstruct quantum systems with up to 10^12 elements. “This demonstrates the unprecedented extent to which this tool can be applied to quantum photonic systems,” said Timon Schapeler, the first author of the study and a research scientist at Paderborn University.

“As far as we know, our work is the first contribution in the field of classical high-performance computing that facilitates experimental quantum photonics on a large scale,” Schapeler added.

The HPC-driven quantum tomography approach holds promise for advancing more efficient data processing, quantum measurement, and communication technologies in the future.

The study is published in the journal Quantum Science and Technology.


The Future of Connectivity: 6G Networks Expected to Outpace 5G by 9,000 Times

Saturday, October 26, 2024 / No Comments

 

Next-generation phone networks could significantly surpass current ones thanks to a novel method for transmitting multiple data streams across a wide range of frequencies.

Wireless data has been transmitted at a remarkable speed of 938 gigabits per second, which is over 9,000 times faster than the average speed of a current 5G phone connection. This speed is equivalent to downloading more than 20 average-length movies each second and sets a new record for multiplex data, where two or more signals are combined.
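A quick back-of-the-envelope check of the figures quoted above (all values approximate):

record_bps = 938e9                        # 938 gigabits per second
speedup = 9000                            # "over 9,000 times faster" than average 5G
avg_5g_bps = record_bps / speedup
print(f"Implied average 5G speed: {avg_5g_bps / 1e6:.0f} Mb/s")    # roughly 104 Mb/s

movies_per_second = 20                    # "more than 20 average-length movies each second"
movie_size_gb = record_bps / 8 / movies_per_second / 1e9
print(f"Implied size per movie: {movie_size_gb:.1f} GB")           # roughly 5.9 GB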

The high demand for wireless signals at large events such as concerts, sports games, and busy train stations often causes mobile networks to slow down significantly. This issue primarily arises from the limited bandwidth available to 5G networks. The portion of the electromagnetic spectrum allocated for 5G varies by country, typically operating at relatively low frequencies below 6 gigahertz and only within narrow frequency bands.

To enhance transmission rates, Zhixin Liu from University College London and his team have utilized a broader range of frequencies than any previous experiments, spanning from 5 gigahertz to 150 gigahertz, employing both radio waves and light.

Liu explains that while digital-to-analog converters are currently used to transmit zeros and ones as radio waves, they face challenges at higher frequencies. His team applied this technology to the lower portion of the frequency range and employed a different technique using lasers for the higher frequencies. By combining both methods, they created a wide data band that could be integrated into next-generation smartphones.

This innovative approach enabled the team to transmit data at 938 Gb/s, which is over 9,000 times faster than the average download speed of 5G in the UK. This capability could provide individuals with incredibly high data rates for applications that are yet to be imagined, and ensure that large groups of people can access sufficient bandwidth for streaming video.

While this achievement sets a record for multiplex data, single signals have been transmitted at even higher speeds, surpassing 1 terabit per second.

Liu likens splitting signals across wide frequency ranges to transforming the “narrow, congested roads” of current 5G networks into “10-lane motorways.” He notes, “Just like with traffic, wider roads are necessary to accommodate more cars.”

Liu mentions that his team is currently in discussions with smartphone manufacturers and network operators, expressing hope that future 6G technology will build on this work, although other competing approaches are also being developed.

Robotics: Piloting a robot through a virtual reality headset

Wednesday, May 6, 2015 / No Comments

Visiting a museum or monument, or enjoying a sunset on the other side of the world, all without leaving home: this is the promise of a platform developed at the University of Pennsylvania (USA) around a remotely operated mobile robot. With this system, a person wearing an Oculus Rift virtual reality headset can steer the cameras filming the scene, giving the pilot an immersive first-person view.

In the near future, combined advances in robotics and virtual reality should yield many novel applications. One can imagine out-of-body experiences such as the lunar rover developed by students and researchers at Carnegie Mellon University in the US, which aims to give the public a first-person view of lunar exploration.

From a more down-to-earth perspective, pairing a virtual reality headset with a remote-controlled robot could help rescue teams assess conditions in hard-to-reach or hazardous areas with greater precision, not to mention the many potential military applications. And why not imagine one day visiting a museum or monument on the other side of the world as if you were there?

It is precisely this kind of immersion that a team from the University of Pennsylvania (USA) has sought to reproduce with its project Dora (Dexterous Roving Observational Automaton), described as an “immersive teleoperated robotic platform.” The system pairs an Oculus Rift virtual reality headset with a remote-controlled robot whose head is fitted with two video cameras.

The person wearing the headset effectively sees through the eyes of the robot, as if they were on site. While this kind of technique is not new in itself, the innovation lies in the freedom of movement Dora offers: the system tracks head movements precisely in six degrees of freedom. The goal is a level of immersion in which the person has the impression of actually being there. The demonstration video published on Vimeo shows human and machine moving virtually as one body.

Faithfully reproducing head movements

The Oculus headset detects both the orientation of the head, using its inertial sensors, and its position, using infrared beacons. This information is transmitted to Arduino and Intel Edison microcontrollers, through which the robot reproduces the head movements. The onboard cameras film at a resolution of 976 x 582 pixels at 30 frames per second, although the Oculus headset could support higher quality.

The main technical challenge was minimizing the latency between the moment the person moves their head and the moment the headset's display renders the result. In that interval, several steps must be completed in a very short time: the computer receives the movement data, processes it, captures the corresponding video frame, and returns it to the Oculus headset. Currently, the Dora system has a latency of about 70 milliseconds, while according to Oculus the maximum acceptable for convincing immersion and realism in virtual reality is 60 milliseconds. The gap is not huge, especially since Dora's designers must also contend with the speed of the wireless link between the headset and the robot as well as friction in the robot's moving parts. They believe they can optimize the system to close the difference.
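The article only reports the 70 ms total and the 60 ms target, so the breakdown below is purely a hypothetical latency budget, sketched to show why 30 fps cameras alone can consume a large share of it; every per-step number is an assumption.

FRAME_INTERVAL_MS = 1000 / 30            # 30 fps cameras: about 33 ms between frames

# Hypothetical split of the reported ~70 ms end-to-end latency (all values assumed).
budget_ms = {
    "head tracking and radio uplink": 10,
    "robot head motor response": 20,
    "camera capture (worst case, one full frame)": FRAME_INTERVAL_MS,
    "video encoding and radio downlink": 7,
}
total = sum(budget_ms.values())
print(f"Hypothetical total: {total:.0f} ms (reported: ~70 ms, Oculus target: <= 60 ms)")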

Currently, the wireless connection between the operator and the robot is a radio link with a range of up to 7 km. For commercial use, the system would have to rely on Wi-Fi or 4G cellular networks, whose performance would need to be good enough to avoid excessive latency, which is not necessarily easy. For now, Dora is primarily a proof of concept, and the project team has not yet decided whether to pursue a commercial product.

Record: Delphi Autonomous Car Travels 5,471 km

Wednesday, April 8, 2015 / No Comments

Delphi, an electronic components manufacturer previously unknown to the general public, has completed the longest autonomous-driving journey ever undertaken in North America. Throughout the 5,471 km crossing of the United States, its Audi SQ5 adapted in real-world conditions to a wide range of situations (weather, highway exits, diversions, roadworks, and so on).

As it had announced, Delphi Automotive PLC has completed a crossing of the United States by autonomous car. On March 22, a specially equipped Audi SQ5 set off from San Francisco for New York on a journey of 5,471 km. Fitted with a multitude of radar sensors, cameras, and microprocessors, it covered 99% of the route without human assistance. Based in the UK, Delphi already develops components for autonomous driving systems, and this trip allowed it to test the extent of its expertise across a wide variety of driving conditions. The journey is recounted in a press release and on the Delphi Drive web page.

Month after month, carmakers and other brands multiply their announcements of innovations in the autonomous car industry. At the New York Auto Show last Thursday, Nissan CEO Carlos Ghosn promised that autonomous cars would reach Japanese roads by the end of 2016 and that they would be able to navigate highways as well as urban roads without the help of a human operator before 2020.

Six experts accompanied the autonomous Audi

In this context, it is somewhat surprising that Delphi is the first to cross the US from coast to coast. Its journey is the longest yet completed in autonomous driving on US roads. Over nine days, the SQ5 crossed 15 states and, as expected, encountered many potentially difficult situations: bad weather, aggressive surrounding drivers, detours for roadworks, and more. These are conditions a human operator can readily understand and respond to, but which a computer system can find far harder to interpret.

Six Delphi experts accompanied the autonomous Audi, either inside the vehicle or in a second car, to receive and analyze the data produced by the sensors and systems responsible for autonomous driving. The trip generated more than 2 terabytes of data on the car's capabilities, including automatic parking, highway driving, lane changes, highway exits, and city driving. “The performance of our car during this trip was outstanding and exceeded our expectations,” observed Jeff Owens, chief technology officer at Delphi. “The intelligence gained through this trip will help us optimize our existing safety products and accelerate the development of future products.”