
Received today — 19 December 2025

Middle East Space Conference 2026 Returns to Muscat as a Strategic Platform for Regional Growth

19 December 2025 at 16:37

Paris, December 2025 — The Ministry of Transport, Communications and Information Technology of the Sultanate of Oman (MTCIT) and Novaspace announce the second edition of the Middle East Space Conference […]

The post Middle East Space Conference 2026 Returns to Muscat as a Strategic Platform for Regional Growth appeared first on SpaceNews.

The hidden backbone of space security: how to keep satellites safe through proper logistics

19 December 2025 at 15:00

The modern space economy is increasingly powered by dual-use satellites that support both civilian services and national security needs. These assets deliver critical capabilities, from communications to Earth observation, but they also face growing risks. Protecting them, along with the intellectual property they carry, requires an integrated end-to-end approach that connects logistics, compliance and mission […]

The post The hidden backbone of space security: how to keep satellites safe through proper logistics appeared first on SpaceNews.

Real-world quantum entanglement is far from an unlimited resource

19 December 2025 at 13:00

Achieving a profound understanding of any subject is hard. When that subject is quantum mechanics, it’s even harder. And when one departs from ideal theoretical scenarios and enters the real world of experimental limitations, it becomes more challenging still – yet that is what physicists at the Freie Universität Berlin (FU-Berlin), Germany, recently did by exploring what happens to entanglement theory in real quantum computers. In doing so, they created a bridge between two fields that have so far largely developed in parallel: entanglement theory (rooted in physics) and computational complexity (rooted in computer science).

Ebits, the standard currency of entanglement

In quantum mechanics, a composite system is said to be entangled when its total wavefunction cannot be written as a product of the states of its individual subsystems. This leads to correlations between subsystems that arise from the structure of the quantum state, not from any shared classical information. Many speed-ups achieved in quantum computing, quantum cryptography and quantum metrology rely heavily on entanglement, but not every form of entanglement is equally useful. Only specific kinds of entanglement will enable a given computational or communication task.
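In standard textbook notation (not specific to this study), a bipartite pure state is entangled precisely when it admits no product decomposition; the Bell state is the canonical maximally entangled example:

```latex
% Entangled: the joint state admits no factorization into subsystem states
|\psi\rangle_{AB} \neq |\phi\rangle_{A} \otimes |\chi\rangle_{B}
% Canonical maximally entangled example: the Bell state
|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
```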

To make quantum technologies practical, the available entangled resources must therefore often be converted into forms suitable for specific applications. One major conversion process involves transforming partially entangled states into, or extracting them from, the maximally entangled bit (ebit) that acts as the standard unit of entanglement. High-fidelity ebits – entangled pairs that are extremely close to the ideal perfectly entangled state – can be distilled from noisy or imperfect entangled states through entanglement distillation, while entanglement dilution allows one to reconstruct the desired entangled states from purified ebits.

In an idealized setting, with an infinite number of copies of entangled states and unlimited computational power, a single quantity called the von Neumann entropy fully determines how many ebits can be extracted or are required. But reality is far less forgiving: entangled states, like gold on Earth, are never available in unlimited supply, and computational power is always finite.

Entanglement under finite resources

In the present work, which is published in Nature Physics, the FU-Berlin team of Lorenzo Leone, Jacopo Rizzo, Jens Eisert and Sofiene Jerbi asked what happens when these ideal assumptions break down. They studied the case where only a finite number of entangled states – scaling at most polynomially with the number of quantum bits (qubits) in the system – is available, and all local operations and classical communication (LOCC) must be performed in polynomial time.

They found that the simple correspondence between von Neumann entropy and extractable or required ebits no longer holds: even when a state has a large von Neumann entropy, the number of ebits that can be efficiently extracted may be much lower. In these cases, the number is bounded instead by the min-entropy of the reduced state – an operational measure, determined solely by the state’s largest eigenvalue, that captures how much entanglement can be reliably distilled from a single copy of the state without averaging over many uses. Conversely, even a state with negligible von Neumann entropy may require a maximal ebit budget for efficient dilution.
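The gap between the two entropies can be read off directly from the spectrum of the reduced state. A minimal sketch (illustrative numbers, not taken from the paper) comparing the idealized von Neumann rate with the single-copy min-entropy bound:

```python
import math

def von_neumann_entropy(eigs):
    """Von Neumann entropy S(rho) = -sum p*log2(p), over nonzero eigenvalues."""
    return -sum(p * math.log2(p) for p in eigs if p > 0)

def min_entropy(eigs):
    """Min-entropy H_min(rho) = -log2(max eigenvalue): a one-shot bound."""
    return -math.log2(max(eigs))

# A skewed spectrum: one dominant eigenvalue, three smaller ones
eigs = [0.5, 0.5 / 3, 0.5 / 3, 0.5 / 3]
print(von_neumann_entropy(eigs))  # ~1.79: the idealized asymptotic rate
print(min_entropy(eigs))          # 1.0: the much tighter one-shot bound
```

For a maximally mixed reduced state the two quantities coincide; the more skewed the spectrum, the wider the gap between the asymptotic promise and what a single copy can deliver.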

Leone and Eisert say they were inspired to perform this study by recent work on so-called pseudo-entangled states – states that look a lot more entangled than they are to computationally bounded observers. Their construction of pseudo-entangled states highlights a dramatic worst-case scenario: a state that appears almost unentangled by conventional measures may still require a large number of ebits to create efficiently. The takeaway is that computability matters: quantum resources you might have thought were available may be, in effect, locked away simply because they cannot be processed efficiently. In other words, practical limitations make the line between a “resource” and a “usable resource” even sharper.

Quantum resources in a limited world

The researchers say that their study raises multiple questions for future exploration. One such question concerns whether a similar computational-efficiency gap exists for other quantum resources such as magic and coherence. Another is whether one can build a full resource theory with complexity constraints, where quantities reflect not just what can be converted, but how efficient that conversion is.

Regardless of the answers, the era of entanglement under infinite bookkeeping is giving way to an era of entanglement under limited books, limited clocks and limited gates. And in this more realistic space, quantum technologies may still shine, but the calculus of what can be done and what can be harnessed needs a serious retooling.

The post Real-world quantum entanglement is far from an unlimited resource appeared first on Physics World.

Hybrid deep-learning model eases brachytherapy planning

19 December 2025 at 10:30
CTV segmentation test: target contouring in two example slices of a patient’s CT scan, using BCTVNet and 12 comparison models. Red and green contours represent the ground truth and the model predictions, respectively. Each image is annotated with the corresponding Dice similarity coefficient. (Courtesy: CC BY 4.0/Mach. Learn.: Sci. Technol. 10.1088/2632-2153/ae2233)

Brachytherapy – a cancer treatment that destroys tumours using small radioactive sources implanted inside the body – plays a critical role in treating cervical cancer, offering an important option for patients with inoperable locally advanced disease. Brachytherapy can deliver high radiation doses directly to the tumour while ensuring nearby healthy tissues receive minimal dose; but its effectiveness relies on accurate delineation of the treatment target. A research team in China is using a hybrid deep-learning model to help with this task.

Planning brachytherapy treatments requires accurate contouring of the clinical target volume (CTV) on a CT scan, a task that’s traditionally performed manually. The limited soft-tissue contrast of CT, however, can result in unclear target boundaries, while applicator or needle insertion (used to deliver the radioactive sources) can deform and displace nearby organs. This makes manual contouring a time-consuming and subjective task that requires a high level of operator expertise.

Automating this process could reduce reliance on operator experience, increase workflow efficiency and improve contouring consistency. With this aim, the research team – headed up by He Ma from Northeastern University and Lin Zhang from Shanghai University of International Business and Economics – developed a 3D hybrid neural network called BCTVNet.

Currently, most brachytherapy segmentation models are based on convolutional neural networks (CNNs). CNNs effectively capture local structural features and can model fine anatomical details but struggle with long-range dependencies, which can cause problems if the target extends across multiple CT slices. Another option is to use transformer-based models that can integrate spatial information across distant regions and slices; but these are less effective at capturing fine-grained local detail.

To combine the strengths of both, BCTVNet integrates CNN with transformer branches to provide strong local detail extraction along with global information integration. BCTVNet performs 3D segmentation directly on post-insertion CT images, enabling the CTV to be defined based on the actual treatment geometry.

Model comparisons

Zhang, Ma and colleagues assessed the performance of BCTVNet using a private CT dataset from 95 patients diagnosed with locally advanced cervical cancer and treated with CT-guided 3D brachytherapy (76 in the training set, 19 in the test set). The scans had an average of 96 slices per patient and a slice thickness of 3 mm.

CT scans used to plan cervical cancer brachytherapy often exhibit unclear target boundaries. To enhance the local soft-tissue contrast and improve boundary recognition, the researchers pre-processed the CT volumes with a 3D version of the CLAHE (contrast-limited adaptive histogram equalization) algorithm, which processes the entire CT scan as a volumetric input. They then normalized the intensity values to standardize the input for the segmentation models.
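The “contrast-limited” step of CLAHE clips each local histogram at a threshold and redistributes the clipped excess across all bins before equalization, which is what keeps noise from being over-amplified. A minimal pure-Python sketch of that clipping step (illustrative only; the team’s 3D implementation and parameters are not reproduced here):

```python
def clip_histogram(hist, clip_limit):
    """Clip histogram bins at clip_limit and spread the excess counts
    uniformly over all bins (the 'contrast-limited' part of CLAHE)."""
    excess = sum(max(0, h - clip_limit) for h in hist)
    clipped = [min(h, clip_limit) for h in hist]
    bonus = excess / len(clipped)           # total mass is preserved
    return [h + bonus for h in clipped]

hist = [50, 5, 3, 2]                        # a toy local histogram
print(clip_histogram(hist, 10))             # [20.0, 15.0, 13.0, 12.0]
```

Full CLAHE implementations typically iterate this step, since redistribution can push some bins back over the limit, and then interpolate the per-tile equalization maps across the volume.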

The researchers compared BCTVNet with 12 popular CNN- and transformer-based segmentation models, evaluating segmentation performance via a series of metrics, including Dice similarity coefficient (DSC), Jaccard index, Hausdorff distance 95th percentile (HD95) and average surface distance.

Contours generated by BCTVNet were closest to the ground truth, reaching a DSC of 83.24% and an HD95 (the maximum distance from the ground truth after excluding the worst 5% of boundary points) of 3.53 mm. BCTVNet consistently outperformed the other models across all evaluation metrics. It also demonstrated strong classification accuracy, with a precision of 82.10% and a recall of 85.84%, implying fewer false detections and successful capture of target regions.
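For reference, the overlap metrics quoted above are all simple functions of the true positives, false positives and false negatives of the binary masks. A minimal sketch using toy masks (not the study’s data):

```python
def segmentation_metrics(pred, truth):
    """Dice, Jaccard, precision and recall for flat 0/1 masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    dice = 2 * tp / (2 * tp + fp + fn)   # DSC = 2|A∩B| / (|A| + |B|)
    jaccard = tp / (tp + fp + fn)        # |A∩B| / |A∪B|
    precision = tp / (tp + fp)           # fraction of predictions that are correct
    recall = tp / (tp + fn)              # fraction of the target that is found
    return dice, jaccard, precision, recall

dice, jac, prec, rec = segmentation_metrics([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
print(round(dice, 3))  # 0.667
```

Dice and Jaccard are monotonically related (DSC = 2J/(1+J)), whereas HD95 measures boundary distance rather than volume overlap, which is why the paper reports both families of metrics.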

To evaluate the model’s generalizability, the team conducted additional experiments on the public dataset SegTHOR, which contains 60 thoracic 3D CT scans (40 for training, 20 for testing) from patients with oesophageal cancer. Here again, BCTVNet achieved the best scores among all the segmentation models, with the highest average DSC of 87.09% and the lowest average HD95 of 7.39 mm.

“BCTVNet effectively overcomes key challenges in CTV segmentation and achieves superior performance compared to existing methods,” the team concludes. “The proposed approach provides an effective and reliable solution for automatic CTV delineation and can serve as a supportive tool in clinical workflows.”

The researchers report their findings in Machine Learning: Science and Technology.

The post Hybrid deep-learning model eases brachytherapy planning appeared first on Physics World.

What problem is charging for Space Situational Awareness supposed to solve?

19 December 2025 at 00:15

A recently issued Executive Order revises how the government implements Space Policy Directive-3, removing the longstanding expectation that basic space situational awareness (SSA) services, including conjunction warnings, would be provided without charge. This decision marks a departure not only from SPD-3, but from more than a decade of United States practice in which Congress and […]

The post What problem is charging for Space Situational Awareness supposed to solve? appeared first on SpaceNews.

Received yesterday — 18 December 2025

Congress’ SBIR standoff is slowing Space Force innovation — it must act now

18 December 2025 at 17:00

At a time when space is unmistakably a contested warfighting domain, the United States risks slowing its own progress not because of a lack of technology or talent, but because Congress has failed to act on renewing authority for critical small business innovation funding. Senior Space Force acquisition officials have publicly warned that the lapse […]

The post Congress’ SBIR standoff is slowing Space Force innovation — it must act now appeared first on SpaceNews.
