Firmware 8B2QJXD7 for the Samsung 990 PRO SSD [Update]


22.68 GB of RAM in use while doing almost nothing?!?
macOS 26.1 arrived on 3 November 2025, nearly a month ago.
It has bugs, including memory leaks: memory is gradually lost as the system is used, allocated and then never returned for other purposes once it is no longer needed.
The first effect is to reduce the RAM available to running applications.
The second is to shrink the disk cache, since that memory is never freed, which slows down storage access.
The third is to swap those unused memory regions out to the internal SSD, reducing the free space available.
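A do-it-yourself way to watch for this kind of leak is to log RAM and swap usage over time. The sketch below is a minimal example using the third-party psutil library (not an Apple tool); the one-minute sampling interval is an arbitrary choice.

```python
# Minimal sketch: watch RAM and swap usage over time to spot a slow leak.
# Requires the third-party "psutil" package (pip install psutil); the 60 s
# sampling interval is an arbitrary illustrative choice.
import time
import psutil

def snapshot():
    vm = psutil.virtual_memory()   # overall RAM statistics
    sw = psutil.swap_memory()      # swap usage on the internal SSD
    return vm.used / 2**30, vm.available / 2**30, sw.used / 2**30

if __name__ == "__main__":
    print("used_GB  available_GB  swap_GB")
    while True:
        used, avail, swap = snapshot()
        print(f"{used:7.2f}  {avail:12.2f}  {swap:7.2f}")
        time.sleep(60)  # one sample per minute
```

If the "used" column keeps climbing and swap starts growing while no new applications are launched, something is holding on to memory it no longer needs.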
Seeing an almost indecent memory footprint, with more than 10 GB of my 32 GB squatted for nothing, I dug into this problem with macOS 26.1 (final release).
Restarting Chrome won back 5 GB of RAM, so I told myself Chrome might be the culprit, yet an enormous amount of RAM was still being wasted.
And with my six tabs reopened, each one was nevertheless using more than 800 MB less RAM...
Chrome guilty or not? That was nearly 1 GB per tab open on YouTube. Strange!
Activity Monitor using 2.91 GB of good, expensive RAM: about €90 worth
But what do we have here?!?
Activity Monitor apparently using around 3 GB of RAM??? That is €90 for 2.9 GB at Apple's prices.
This is not Google Chrome's code, it is Apple's own code, and the same app uses 75.6 MB of RAM on my work Mac running macOS 15.7.2, a machine that does far more, and where only Docker uses close to 3 GB!
3 GB of RAM just to display a list of processes and their resource usage?!?
€90 including tax, at the price Apple charges for RAM???
There are clearly memory leaks in 100% Apple code. I will keep following this, but I for one will not move any critical Mac to macOS 26 before macOS 26.3 or 26.4!


Higher levels of carbon dioxide (CO2) in the Earth’s atmosphere could harm radio communications by enhancing a disruptive effect in the ionosphere. According to researchers at Kyushu University, Japan, who modelled the effect numerically for the first time, this little-known consequence of climate change could have significant impacts on shortwave radio systems such as those employed in broadcasting, air traffic control and navigation.
“While increasing CO2 levels in the atmosphere warm the Earth’s surface, they actually cool the ionosphere,” explains study leader Huixin Liu of Kyushu’s Faculty of Science. “This cooling doesn’t mean it is all good: it decreases the air density in the ionosphere and accelerates wind circulation. These changes affect the orbits and lifespan of satellites and space debris and also disrupt radio communications through localized small-scale plasma irregularities.”
One such irregularity is a dense but transient layer of metal ions that forms between 90‒120 km above the Earth’s surface. This sporadic E-layer (Es), as it is known, is roughly 1‒5 km thick and can stretch from tens to hundreds of kilometres in the horizontal direction. Its density is highest during the day, and it peaks around the time of the summer solstice.
The formation of the Es is hard to predict, and the mechanisms behind it are not fully understood. However, the prevailing “wind shear” theory suggests that vertical shears in horizontal winds, combined with the Earth’s magnetic field, cause metallic ions such as Fe+, Na+ and Ca+ to converge in the ionospheric dynamo region and form thin layers of enhanced ionization. The ions themselves largely come from metals in meteoroids that enter the Earth’s atmosphere and disintegrate at altitudes of around 80‒100 km.
While previous research has shown that increases in CO2 trigger atmospheric changes on a global scale, relatively little is known about how these increases affect smaller-scale ionospheric phenomena like the Es. In the new work, which is published in Geophysical Research Letters, Liu and colleagues used a whole-atmosphere model to simulate the upper atmosphere at two different CO2 concentrations: 315 ppm and 667 ppm.
“The 315 ppm represents the CO2 concentration in 1958, the year in which recordings started at the Mauna Loa observatory, Hawaii,” Liu explains. “The 667 ppm represents the projected CO2 concentration for the year 2100, based on a conservative assumption that the increase in CO2 is constant at a rate of around 2.5 ppm/year since 1958.”
The researchers then evaluated how these different CO2 levels influence a phenomenon known as vertical ion convergence (VIC) which, according to the wind shear theory, drives the Es. The simulations revealed that the higher the atmospheric CO2 levels, the greater the VIC at altitudes of 100–120 km. “What is more, this increase is accompanied by the VIC hotspots shifting downwards by approximately 5 km,” says Liu. “The VIC patterns also change dramatically during the day and these diurnal variability patterns continue into the night.”
According to the researchers, the physical mechanism underlying these changes depends on two factors. The first is reduced collisions between metallic ions and the neutral atmosphere as a direct result of cooling in the ionosphere. The second is changes in the zonal wind shear, which are likely caused by long-term trends in atmospheric tides.
“These results are exciting because they show that the impacts of CO2 increase can extend all the way from Earth’s surface to altitudes at which HF and VHF radio waves propagate and communications satellites orbit,” Liu tells Physics World. “This may be good news for ham radio amateurs, as you will likely receive more signals from faraway countries more often. For radio communications, however, especially at HF and VHF frequencies employed for aviation, ships and rescue operations, it means more noise and frequent disruption in communication and hence safety. The telecommunications industry might therefore need to adjust their frequencies or facility design in the future.”
The post Extra carbon in the atmosphere may disrupt radio communications appeared first on Physics World.

Structural colours – created using nanostructures that scatter and reflect specific wavelengths of light – offer a non-toxic, fade-resistant and environmentally friendly alternative to chemical dyes. Large-scale production of structural colour-based materials, however, has been hindered by fabrication challenges and a lack of effective tuning mechanisms.
In a step towards commercial viability, a team at the University of Central Florida has used vanadium dioxide (VO2) – a material with temperature-sensitive optical and structural properties – to generate tunable structural colour on both rigid and flexible surfaces, without requiring complex nanofabrication.
Senior author Debashis Chanda and colleagues created their structural colour platform by stacking a thin layer of VO2 on top of a thick, reflective layer of aluminium to form a tunable thin-film cavity. At specific combinations of VO2 grain size and layer thickness this structure strongly absorbs certain frequency bands of visible light, producing the appearance of vivid colours.
The key enabler of this approach is the fact that at a critical transition temperature, VO2 reversibly switches from insulator to metal, accompanied by a change in its crystalline structure. This phase change alters the interference conditions in the thin-film cavity, varying the reflectance spectra and changing the perceived colour. Controlling the thickness of the VO2 layer enables the generation of a wide range of structural colours.
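As a rough illustration of the interference physics at play (my own sketch, not the authors' model), the normal-incidence reflectance of a single absorbing film on a mirror-like substrate can be computed with the standard Airy formula. The complex refractive indices below are placeholder values chosen purely for demonstration, not measured VO2 or aluminium data.

```python
# Minimal sketch of thin-film interference on a reflective substrate (Airy formula).
# The complex refractive indices are illustrative placeholders; the point is only
# that a phase change altering the film's optical constants shifts the reflectance
# spectrum, and hence the perceived colour.
import numpy as np

def film_reflectance(wavelength_nm, n_film, d_nm, n_substrate, n_ambient=1.0):
    """Normal-incidence reflectance of a single film on a thick substrate."""
    r01 = (n_ambient - n_film) / (n_ambient + n_film)      # ambient/film interface
    r12 = (n_film - n_substrate) / (n_film + n_substrate)  # film/substrate interface
    beta = 2 * np.pi * n_film * d_nm / wavelength_nm       # complex phase thickness
    r = (r01 + r12 * np.exp(2j * beta)) / (1 + r01 * r12 * np.exp(2j * beta))
    return np.abs(r) ** 2

wavelengths = np.linspace(400, 700, 7)   # visible range, nm
n_mirror = 1.0 + 6.0j                    # placeholder metallic mirror index
for label, n_film in [("insulating phase", 2.9 + 0.4j),   # placeholder index
                      ("metallic phase",   2.0 + 1.2j)]:  # placeholder index
    R = film_reflectance(wavelengths, n_film, d_nm=120, n_substrate=n_mirror)
    print(label, np.round(R, 2))
```

Changing the film's complex index (here a stand-in for the insulator-to-metal transition) visibly moves the low-reflectance band, which is the colour-switching mechanism the paper describes.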
The bilayer structures are grown via a combination of magnetron sputtering and electron-beam deposition, techniques compatible with large-scale production. By adjusting the growth parameters during fabrication, the researchers could broaden the colour palette and control the temperature at which the phase transition occurs. To expand the available colour range further, they added a third ultrathin layer of high-refractive index titanium dioxide on top of the bilayer.
The researchers describe a range of applications for their flexible coloration platform, including a colour-tunable maple leaf pattern, a thermal sensing label on a coffee cup and tunable structural coloration on flexible fabrics. They also demonstrated its use on complex shapes, such as a toy gecko with a flexible tunable colour coating and an embedded heater.
“These preliminary demonstrations validate the feasibility of developing thermally responsive sensors, reconfigurable displays and dynamic colouration devices, paving the way for innovative solutions across fields such as wearable electronics, cosmetics, smart textiles and defence technologies,” the team concludes.
The research is described in Proceedings of the National Academy of Sciences.
The post Phase-changing material generates vivid tunable colours appeared first on Physics World.

Susumu Noda of Kyoto University has won the 2026 Rank Prize for Optoelectronics for the development of the Photonic Crystal Surface Emitting Laser (PCSEL). For more than 25 years, Noda developed this new form of laser, which has potential applications in high-precision manufacturing as well as in LIDAR technologies.
Following the development of the laser in 1960, optical-fibre lasers and semiconductor lasers have in more recent decades become competing technologies.
A semiconductor laser works by pumping an electrical current into a region where an n-doped (excess of electrons) and a p-doped (excess of “holes”) semiconductor material meet, causing electrons and holes to combine and release photons.
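As a back-of-the-envelope reminder (standard textbook physics, not part of the prize citation), the photon energy of such a diode is set approximately by the semiconductor's bandgap energy $E_g$, so the emission wavelength is

$$\lambda \;\approx\; \frac{hc}{E_g} \;\approx\; \frac{1240\ \text{nm}\cdot\text{eV}}{E_g\,[\text{eV}]},$$

meaning a bandgap of about 1.4 eV corresponds to emission at roughly 890 nm, in the near-infrared.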
Semiconductor lasers have several advantages in terms of compactness, high “wallplug” efficiency and ruggedness, but fall short in other areas, such as brightness and functionality.
This means that conventional semiconductor lasers have required external optical and mechanical elements to improve their performance, which results in large and impractical systems.
In the late 1990s, Noda began working on a new type of semiconductor laser that could challenge the performance of optical fibre lasers. These so-called PCSELs employ a photonic crystal layer in between the semiconductor layers. Photonic crystals are nanostructured materials in which a periodic variation of the dielectric constant — formed, for example, by a lattice of holes — creates a photonic band-gap.
Noda and his research group made a series of breakthroughs in the technology, such as demonstrating control of polarization and beam shape by tailoring the photonic crystal structure, and extending operation to blue–violet wavelengths.
The resulting PCSELs emit a high-quality, symmetric beam with narrow divergence and boast high brightness and high functionality while maintaining the benefits of conventional semiconductor lasers. In 2013, 0.2 W PCSELs became available, and a few years later watt-class devices became operational.
Noda says that it is “a great honour and a surprise” to receive the prize. “I am extremely happy to know that more than 25 years of research on photonic-crystal surface-emitting lasers has been recognized in this way,” he adds. “I do hope to continue to further develop the research and its social implementation.”
Susumu Noda received his BSc and then PhD in electronics from Kyoto University in 1982 and 1991, respectively. From 1984 he also worked at Mitsubishi Electric Corporation, before joining Kyoto University in 1988 where he is currently based.
Founded in 1972 by the British industrialist and philanthropist Lord J Arthur Rank, the Rank Prize is awarded biennially in nutrition and optoelectronics. The 2026 Rank Prize for Optoelectronics, which has a cash award of £100 000, will be awarded formally at an event held in June.
The post Semiconductor laser pioneer Susumu Noda wins 2026 Rank Prize for Optoelectronics appeared first on Physics World.

In 2006, more than 400,000 Canadians were living with Alzheimer's disease or other forms of dementia. While medical treatments remained limited, a non-pharmacological approach was raising hope: music therapy. Without curing the disease, music offers real support. Used regularly, it can reduce stress, paranoia, confusion and agitation. Several studies also report improved mobility and better expression of emotions. By activating certain areas of the brain, music creates a space of comfort and helps transform patients' daily lives. Source: Découverte, 26 March 2006. Reporter: Michel Rochon. Presenter: Charles Tisseyre.


As a theoretical and mathematical physicist at Imperial College London, UK, Bhavin Khatri spent years using statistical physics to understand how organisms evolve. Then the COVID-19 pandemic struck, and like many other scientists, he began searching for ways to apply his skills to the crisis. This led him to realize that the equations he was using to study evolution could be repurposed to model the spread of the virus – and, crucially, to understand how it could be curtailed.
In a paper published in EPL, Khatri models the spread of a SARS-CoV-2-like virus using branching process theory, which he’d previously used to study how advantageous alleles (variations in a genetic sequence) become more prevalent in a population. He then uses this model to assess the duration that interventions such as lockdowns would need to be applied in order to completely eliminate infections, with the strength of the intervention measured in terms of the number of people each infected person goes on to infect (the virus’ effective reproduction number, R).
Tantalizingly, the paper concludes that applying such interventions worldwide in June 2020 could have eliminated the COVID virus by January 2021, several months before the widespread availability of vaccines reduced its impact on healthcare systems and led governments to lift restrictions on social contact. Physics World spoke to Khatri to learn more about his research and its implications for future pandemics.
One important finding is that we can accurately calculate the distribution of times required for a virus to become extinct by making a relatively simple approximation. This approximation amounts to assuming that people have relatively little population-level “herd” immunity to the virus – exactly the situation that many countries, including the UK, faced in March 2020.
Making this approximation meant I could reduce the three coupled differential equations of the well-known SIR model (which models pandemics via the interplay between Susceptible, Infected and Recovered individuals) to a single differential equation for the number of infected individuals in the population. This single equation turned out to be the same one that physics students learn when studying radioactive decay. I then used the discrete stochastic version of exponential decay and standard approaches in branching process theory to calculate the distribution of extinction times.
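To make that reduction concrete (a standard simplification written here in conventional SIR notation, consistent with Khatri's description but not copied from his paper): with negligible herd immunity the susceptible fraction $S/N \approx 1$, so the infected-compartment equation collapses to a single exponential,

$$\frac{dI}{dt} = \beta\,\frac{S}{N}\,I - \gamma I \;\approx\; \gamma\,(R-1)\,I \quad \text{for } \frac{S}{N}\approx 1, \qquad R \equiv \frac{\beta}{\gamma},$$

where $\beta$ is the transmission rate and $\gamma$ the recovery rate. For an intervention holding $R$ below 1, this is mathematically the same exponential decay that describes radioactive nuclei, with decay constant $\gamma(1-R)$.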

Alongside the formal theory, I also used my experience in population genetic theory to develop an intuitive approach for calculating the mean of this extinction time distribution. In population genetics, when a mutation is sufficiently rare, changes in its number of copies in the population are dominated by randomness. This is true even if the mutation has a large selective advantage: it has to grow by chance to sufficient critical size – on the order of 1/(selection strength) – for selection to take hold.
The same logic works in reverse when applied to a declining number of infections. Initially, they will decline deterministically, but once they go below a threshold number of individuals, changes in infection numbers become random. Using the properties of such random walks, I calculated an expression for the threshold number and the mean duration of the stochastic phase. These agree well with the formal branching process calculation.
In practical terms, the main result of this theoretical work is to show that for sufficiently strong lockdowns (where, on average, only one of every two infected individuals goes on to infect another person, R=0.5), this distribution of extinction times was narrow enough to ensure that the COVID pandemic virus would have gone extinct in a matter of months, or at most a year.
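To get a feel for those numbers, here is a small Monte Carlo of my own (an illustration, not Khatri's calculation): a discrete-generation branching process in which each infected person infects a Poisson-distributed number of others with mean R = 0.5, started from a large initial case count. The starting count and five-day generation interval are arbitrary round numbers.

```python
# Illustrative Monte Carlo of epidemic extinction under a sustained intervention.
# Each generation, every infected person infects Poisson(R) others; with R < 1
# the outbreak eventually dies out, and we record how long that takes.
# Parameters (R, initial infections, generation interval) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def extinction_time(initial_infected, R, generation_days, max_generations=1000):
    infected = initial_infected
    for generation in range(1, max_generations + 1):
        infected = rng.poisson(R * infected)   # total offspring of this generation
        if infected == 0:
            return generation * generation_days
    return None  # did not go extinct within the horizon (should not happen for R < 1)

times = [extinction_time(initial_infected=100_000, R=0.5, generation_days=5)
         for _ in range(200)]
print(f"mean ≈ {np.mean(times)/30:.1f} months, "
      f"95th percentile ≈ {np.percentile(times, 95)/30:.1f} months")
```

With these toy numbers the simulated extinction times land in the region of three to four months, in line with the "matter of months" picture described above.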
Leaving politics and the likelihood of social acceptance aside for the moment, if a sufficiently strong lockdown could have been maintained for a period of roughly six months across the globe, then I am confident that the virus could have been reduced to very low levels, or even made extinct.
The question then is: is this a stable situation? From the perspective of a single nation, if the rest of the world still has infections, then that nation either needs to maintain its lockdown or be prepared to re-impose it if there are new imported cases. From a global perspective, a COVID-free world should be a stable state, unless an animal reservoir of infections causes re-infections in humans.

As for the practical success of such a strategy, that depends on politics and the willingness of individuals to remain in lockdown. Clearly, this is not in the model. One thing I do discuss, though, is that this strategy becomes far more difficult once more infectious variants of SARS-CoV-2 evolve. However, the problem I was working on before this one (which I eventually published in PNAS) concerned the probability of evolutionary rescue or resistance, and that work suggests that evolution of new COVID variants reduces significantly when there are fewer infections. So an elimination strategy should also be more robust against the evolution of new variants.
I’d like them to conclude that pandemics with similar properties are, in principle, controllable to small levels of infection – or complete extinction – on timescales of months, not years, and that controlling them minimizes the chance of new variants evolving. So, although the question of the political and social will to enact such an elimination strategy is not in the scope of the paper, I think if epidemiologists, policy experts, politicians and the public understood that lockdowns have a finite time horizon, then it is more likely that this strategy could be adopted in the future.
I should also say that my work makes no comment on the social harms of lockdowns, which shouldn’t be minimized and would need to be weighed against the potential benefits.
I think the most interesting next avenue will be to develop theory that lets us better understand the stability of the extinct state at the national and global level, under various assumptions about declining infections in other countries that adopted different strategies and the role of an animal reservoir.
It would also be interesting to explore the role of “superspreaders”, or infected individuals who infect many other people. There’s evidence that many infections spread primarily through relatively few superspreaders, and heuristic arguments suggest that taking this into account would decrease the time to extinction compared to the estimates in this paper.
I’ve also had a long-term interest in understanding the evolution of viruses through the lens of what are known as genotype-phenotype maps, where we consider the non-trivial and often redundant mapping from genetic sequences to function, and where the role of stochasticity in evolution can be described by statistical physics analogies. For the evolution of the antibodies that help us avoid virus antigens, this would be a driven system, and theories of non-equilibrium statistical physics could play a role in answering questions about the evolution of new variants.
The post Staying the course with lockdowns could end future pandemics in months appeared first on Physics World.
Whether you’re running a business project, carrying out scientific research, or doing a spot of DIY around the house, knowing when something is “good enough” can be a tough question to answer. To me, “good enough” means something that is fit for purpose. It’s about striking a balance between the effort required to achieve perfection and the cost of not moving forward. It’s an essential mindset when perfection is either not needed or – as is often the case – not attainable.
When striving for good enough, the important thing to focus on is that your outcome should meet expectations, but not massively exceed them. Sounds simple, but how often have we heard people say things like they're "polishing coal", striving for "gold plated" or "trying to make a silk purse out of a sow's ear"? It basically means they haven't understood, defined or even accepted the requirements of the end goal.
Trouble is, as we go through school, college and university, we’re brought up to believe that we should strive for the best in whatever we study. Those with the highest grades, we’re told, will probably get the best opportunities and career openings. Unfortunately, this approach means we think we need to aim for perfection in everything in life, which is not always a good thing.
So why is aiming for “good enough” a good thing to do? First, there’s the notion of “diminishing returns”. It takes a disproportionate amount of effort to achieve the final, small improvements that most people won’t even notice. Put simply, time can be wasted on unnecessary refinements, as embodied by the 80/20 rule (see box).
Also known as the Pareto principle – in honour of the Italian economist Vilfredo Pareto who first came up with the idea – the 80/20 rule states that for many outcomes, 80% of consequences or results come from 20% of the causes or effort. The principle helps to identify where to prioritize activities to boost productivity and get better results. It is a guideline, and the ratios can vary, but it can be applied to many things in both our professional and personal lives.
Examples from the world of business include the following:
Business sales: 80% of a company’s revenue might come from 20% of its customers.
Company productivity: 80% of your results may come from 20% of your daily tasks.
Software development: 80% of bugs could be caused by 20% of the code.
Quality control: 20% of defects may cause 80% of customer complaints.
Good enough also helps us to focus efforts. When a consumer or customer doesn’t know exactly what they want, or a product development route is uncertain, it can be better to deliver things in small chunks. Providing something basic but usable can be used to solicit feedback to help clarify requirements or make improvements or additions that can be incorporated into the next chunk. This is broadly along the lines of a “minimum viable product”.
Not seeking perfection reminds us too that solutions to problems are often uncertain. If it’s not clear how, or even if, something might work, a proof of concept (PoC) can instead be a good way to try something out. Progress can be made by solving a specific technical challenge, whether via a basic experiment, demonstration or short piece of research. A PoC should help avoid committing significant time and resource to something that will never work.
Aiming for “good enough” naturally leads us to the notion of “continuous improvement”. It’s a personal favourite of mine because it allows for things to be improved incrementally as we learn or get feedback, rather than producing something in one go and then forgetting about it. It helps keep things current and relevant and encourages a culture of constantly looking for a better way to do things.
Finally, when searching for good enough, don’t forget the idea of ballpark estimates. Making approximations sounds too simple to be effective, but sometimes a rough estimate is really all you need. If an approximate guess can inform and guide your next steps or determine whether further action will be necessary then go for it.
Being good enough doesn’t just lead to practical outcomes, it can benefit our personal well-being too. Our time, after all, is a precious commodity and we can’t magically increase this resource. The pursuit of perfection can lead to stagnation, and ultimately burnout, whereas achieving good enough allows us to move on in a timely fashion.
A good-enough approach will even make you less stressed. By getting things done sooner and achieving more, you’ll feel freer and happier about your work even if it means accepting imperfection. Mistakes and errors are inevitable in life, so don’t be afraid to make them; use them as learning opportunities, rather than seeing them as something bad. Remember – the person who never made a mistake never got out of bed.
Recognizing that you’ve done the best you can for now is also crucial for starting new projects and making progress. By accepting good enough you can build momentum, get more things done, and consistently take actions toward achieving your goals.
Finally, good enough is also about shared ownership. By inviting someone else to look at what you’ve done, you can significantly speed up the process. In my own career I’ve often found myself agonising over some obscure detail or feeling something is missing, only to have my quandary solved almost instantly simply by getting someone else involved – making me wish I’d asked them sooner.
Good enough comes with some caveats. Regulatory or legislative requirements mean there will always be projects that have to reach a minimum standard, and meeting it will be your top priority. The precise nature of good enough will also depend on whether you're making stuff (be it cars or computers) or dealing with intangible commodities such as software or services.
So what’s the conclusion? Well, in the interests of my own time, I’ve decided to apply the 80/20 rule and leave it to you to draw your own conclusion. As far as I’m concerned, I think this article has been good enough, but I’m sure you’ll let me know if it hasn’t. Consider it as a minimally viable product that I can update in a future column.
The post When is good enough ‘good enough’? appeared first on Physics World.

New high-precision laser spectroscopy measurements on thorium-229 nuclei could shed more light on the fine structure constant, which determines the strength of the electromagnetic interaction, say physicists at TU Wien in Austria.
The electromagnetic interaction is one of the four known fundamental forces in nature, with the others being gravity and the strong and weak nuclear forces. Each of these fundamental forces has an interaction constant that describes its strength in comparison with the others. The fine structure constant, α, has a value of approximately 1/137. If it had any other value, charged particles would behave differently, chemical bonding would manifest in another way and light-matter interactions as we know them would not be the same.
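For reference (the standard definition, not specific to this study), the fine structure constant is the dimensionless combination

$$\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137.036},$$

where $e$ is the elementary charge, $\varepsilon_0$ the vacuum permittivity, $\hbar$ the reduced Planck constant and $c$ the speed of light.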
“As the name ‘constant’ implies, we assume that these forces are universal and have the same values at all times and everywhere in the universe,” explains study leader Thorsten Schumm from the Institute of Atomic and Subatomic Physics at TU Wien. “However, many modern theories, especially those concerning the nature of dark matter, predict small and slow fluctuations in these constants. Demonstrating a non-constant fine-structure constant would shatter our current understanding of nature, but to do this, we need to be able to measure changes in this constant with extreme precision.”
With thorium spectroscopy, he says, we now have a very sensitive tool to search for such variations.
The new work builds on a project that led, last year, to the world's first nuclear clock, and is based on precisely determining how the thorium-229 (229Th) nucleus changes shape when one of its neutrons transitions from the ground state to a higher-energy state. “When excited, the 229Th nucleus becomes slightly more elliptic,” Schumm explains. “Although this shape change is small (at the 2% level), it dramatically shifts the contributions of the Coulomb interactions (the repulsion between protons in the nucleus) to the nuclear quantum states.”
The result is a change in the geometry of the 229Th nucleus’ electric field, to a degree that depends very sensitively on the value of the fine structure constant. By precisely observing this thorium transition, it is therefore possible to measure whether the fine-structure constant is actually a constant or whether it varies slightly.
After making crystals of 229Th doped in a CaF2 matrix at TU Wien, the researchers performed the next phase of the experiment in a JILA laboratory at the University of Colorado, Boulder, US, firing ultrashort laser pulses at the crystals. While they did not measure any changes in the fine structure constant, they did succeed in determining how such changes, if they exist, would translate into modifications to the energy of the first nuclear excited state of 229Th.
“It turns out that this change is huge, a factor 6000 larger than in any atomic or molecular system, thanks to the high energy governing the processes inside nuclei,” Schumm says. “This means that we are by a factor of 6000 more sensitive to fine structure variations than previous measurements.”
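A common way to parametrize this sensitivity (my framing, using the enhancement-factor notation that Caputo also refers to below) is to relate a fractional drift in $\alpha$ to the fractional shift it would induce in the nuclear transition frequency $\nu$:

$$\frac{\delta\nu}{\nu} \;=\; K\,\frac{\delta\alpha}{\alpha},$$

with the measured shape change implying an enhancement factor $K$ roughly 6000 times larger than for any atomic or molecular transition used to date.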
Researchers in the field have debated the likelihood of such an “enhancement factor” for decades, and theoretical predictions of its value have varied between zero and 10 000. “Having confirmed such a high enhancement factor will now allow us to trigger a ‘hunt’ for the observation of fine structure variations using our approach,” Schumm says.
Andrea Caputo of CERN’s theoretical physics department, who was not involved in this work, calls the experimental result “truly remarkable”, as it probes nuclear structure with a precision that has never been achieved before. However, he adds that the theoretical framework is still lacking. “In a recent work published shortly before this work, my collaborators and I showed that the nuclear-clock enhancement factor K is still subject to substantial theoretical uncertainties,” Caputo says. “Much progress is therefore still required on the theory side to model the nuclear structure reliably.”
Schumm and colleagues are now working on increasing the spectroscopic accuracy of their 229Th transition measurement by another one to two orders of magnitude. “We will then start hunting for fluctuations in the transition energy,” he reveals, “tracing it over time and – through the Earth’s movement around the Sun – space.”
The present work is detailed in Nature Communications.
The post Looking for inconsistencies in the fine structure constant appeared first on Physics World.

The Tineco Pure One A90S was officially announced on 7 November. The model had also been spotted earlier in the year at IFA 2025 in Berlin, where the brand presented its latest smart vacuum products. This double exposure, at a major tech show and then in an official announcement, marked the arrival of the A90S as one of the most innovative stick vacuums of the new generation.
Now officially on sale, it can be found, at the time of writing, on promotion at €599; its usual list price is €699. On with the review!
This first part of the review, the packaging, is quickly dealt with. On the left and right sides we find the brand name, the model name and the words "Cordless Stick Vacuum Cleaner", while the front carries the same information, this time with a drawing of the model in question. One small regret: a couple of handles to carry the whole thing would have been welcome, because the box is fairly heavy.
| Specification | Detail |
| --- | --- |
| Brand | Tineco |
| Special features | Powerful 270 AW suction, 3DSense master brush, ZeroTangle design, up to 105 min of runtime, SmartLift system |
| Filter type | HEPA filter |
| Included components | FlexiSoft nozzle, motorized mini brush, 2-in-1 crevice and dusting tool, charging and storage dock |
| Cordless? | Yes |
| Power | 650 W |
| Form factor | Stick |
| Colour | Gunmetal grey |
| Model name | Pure One A90S |
| Product dimensions | 37 x 29.5 x 122.5 cm (L x W x H) |
In just a few years, Tineco has become one of the key players in so-called smart cleaning. You can find all of our reviews of the brand here. With the Pure One A90S, the brand is back with a more ambitious, longer-lasting and more versatile stick vacuum, designed to make housework simpler without sacrificing performance. On paper, it is a model that clearly sits in the high-end segment. But as always at Vonguru, we wanted to check whether it really keeps its promises.
Unsurprisingly, Tineco continues to refine its identity. The Pure One A90S takes up the elegant, modern lines of the brand's other models, with a metallic grey chassis, a circular screen neatly integrated into the handle and the overall impression of a well-finished product. The device is pleasant to handle: balanced, not too heavy and, above all, very manoeuvrable, which is crucial when you switch from hard floors to ceilings within moments. The many supplied accessories confirm the brand's focus on versatility for its new model: motorized brush, long crevice tool, textile nozzle, anti-dust-mite mini brush... everything is there, and nothing feels cheap. Yes, everything exudes quality.
What makes Tineco vacuums strong is the iLoop system, which analyses the amount of dirt in real time to adjust the suction power automatically. On the A90S, this technology is even more responsive and better calibrated. In practice, you stop thinking about manual modes: on tiles it stays discreet and saves battery, while on carpet, or over a pile of crumbs, it instantly ramps up much harder.
The LED screen, which changes colour according to the amount of dirt detected, is not just a gimmick: it helps you spot areas that are still dirty and know when to keep going. In Auto mode, the Tineco Pure One A90S easily exceeds 50 minutes of use, which is more than enough for an apartment or a small house. The iLoop system optimizes consumption so well that you almost never need to use Max mode.
Swapping the battery is extremely simple and even lets you double the runtime if you invest in a second one. The dust bin empties very easily, without spraying dust around, and the HEPA filtration system comes apart in a few seconds. We appreciate the anti-tangle brush, which is particularly effective: noticeably fewer hairs wrapped around it, a real everyday comfort.
The 150° wide-angle green LEDs clearly reveal dust particles, and on every pass I have the same thought: "My god, how can it be this dirty??" And that is knowing I vacuum several times a day... These LEDs are a must-have, as on our Dreame H15 Mix, and we love them.
We do, however, regret the size of the bin, which needs emptying fairly regularly, and the wall-mounted dock, which still seems somewhat perfectible to me. On powerful stick vacuums, noise is often the weak point. Here, Tineco is a pleasant surprise: even during power peaks the vacuum remains reasonably quiet, and in Auto mode it even becomes discreet. We also appreciate the folding wand, very handy for getting under hard-to-reach furniture.
The Pure One A90S is a premium stick vacuum that fully owns its status. It offers very good power, intelligently optimized suction, solid battery life and a level of everyday comfort that really makes a difference in use. A few details, such as the bin size or the perfectible wall dock, could still be improved, but the whole package is coherent, modern and very effective.
In the end, the Tineco Pure One A90S is an excellent choice for anyone who wants a smart vacuum that genuinely performs day after day. That said, its price is clearly not within everyone's reach. As a reminder, at the time of writing it is on promotion at €599; its usual price is €699.
The post Test – Aspirateur balai Tineco Pure One A90S can be read on Vonguru.
Intel Fab 52 with its 18A process
A rumour claims that Apple could become a customer of Intel's foundries, having future Mx SoCs for its Macs or Ax SoCs for its iPhones manufactured there.
Intel's foundries are well behind TSMC, to the point that Intel itself has its high-end parts made by the latter. They could only manufacture earlier generations for Apple.
That seems possible to me, but not for the reasons put forward here and there...
The first reason cited would be to make sure Mx or Ax SoCs could still be produced if trouble broke out between China and Taiwan, home of TSMC, affecting either the production or the shipment of SoCs.
If that happened, I can promise you that Intel's foundries would not be used to make SoCs for Apple but, obviously, processors for the DoD and every strategic sector in the USA.
In fact, it is precisely because the USA needs to retain a sovereign chip-making capability (CPU, GPU, RAM, SoC, etc.) that Intel's foundries must exist, as a second source for chips that do not require the latest process nodes.
Except that, as long as that scenario does not materialize, those fabs have to be kept running and made profitable!
The current US administration would like Apple to do final assembly of products such as Macs and iPhones in the USA. That is hard to achieve quickly.
Intel, on the other hand, needs customers for its foundries; they are heavily subsidized, but that is not enough.
I think the negotiation has revolved around the idea of producing older Apple SoCs at Intel, in exchange for dropping the idea of doing final assembly in the USA.
But bear in mind that Intel will not be able to produce M5-class chips for at least another year or two, and that in the short term they could only target M3- or even M2-class parts, or their iPhone equivalents.
Contrary to what I have read elsewhere, an M4 requires exactly the same manufacturing process as an M4 Max; otherwise you end up with a product that consumes far more power, runs slower, or both!
According to a British study, adolescence now lasts, on average, until the age of 32.
