Claude Code Safety Net - The plugin that stops the AI from wrecking everything

By: Korben
December 26, 2025 at 10:30

Do you use Claude Code like I do to get through your dev projects faster? Well, I hope you've never had the nasty surprise of watching the agent fire off a little rm -rf ~/ that wipes out your entire home directory in two seconds. Because yes, unfortunately it happens, and several devs have paid the price this year...

The problem is that AI agents, however smart, can lack guardrails around what is genuinely dangerous. You tell them "clean up the project", they take it a bit too literally, and once it's done, all that's left is to cry in front of an empty terminal.

That's why a developer by the name of kenryu42 created Claude Code Safety Net, a plugin for Claude Code that acts as a mechanical guardrail. The idea is to block destructive commands BEFORE they execute, and not just with dumb rules like "if the command starts with rm -rf".

The plugin is much smarter than that: it performs semantic analysis of commands. It understands the difference between git checkout -b nouvelle-branche (which is safe, it just creates a branch) and git checkout -- ., which throws away all your uncommitted changes to tracked files. Both start the same way, but one saves you and the other ruins you psychologically.
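
To make that distinction concrete, here's a minimal sketch of token-level analysis in Python; this is my own illustration, not the plugin's actual code:

import shlex

def is_destructive_checkout(command: str) -> bool:
    # Toy token-level check; the real safety_net.py is certainly
    # more thorough than this.
    tokens = shlex.split(command)
    if tokens[:2] != ["git", "checkout"]:
        return False
    if "-b" in tokens:          # creates a branch: harmless
        return False
    return "--" in tokens       # pathspec form: discards local edits

assert not is_destructive_checkout("git checkout -b nouvelle-branche")
assert is_destructive_checkout("git checkout -- .")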

Same story for force pushes. The plugin blocks git push --force, which can overwrite remote history and make recovery very difficult, but it lets git push --force-with-lease through, the safer variant, since it checks that the remote ref matches what you expect (even if that's not an absolute guarantee).
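
The same token-based idea extends naturally to pushes. Again, a sketch under the same assumption, not the plugin's real logic:

import shlex

def push_verdict(command: str) -> str:
    # Exact token matches, so "--force-with-lease" is never
    # confused with "--force".
    tokens = shlex.split(command)
    if tokens[:2] == ["git", "push"]:
        if "--force-with-lease" in tokens:
            return "allow"      # verifies the remote ref first
        if "--force" in tokens or "-f" in tokens:
            return "block"      # rewrites remote history blindly
    return "allow"

print(push_verdict("git push --force"))             # block
print(push_verdict("git push --force-with-lease"))  # allow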

And the really well-done part is that it also detects commands hidden inside shell wrappers. You know, the kind of trap where someone writes sh -c "rm -rf /" to bypass basic protections. The plugin parses recursively and spots the dangerous command inside. It even hunts down Python, Ruby or Node one-liners that could do damage.
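
A minimal sketch of that recursive unwrapping, assuming the canonical sh -c "..." shape (the real plugin certainly handles more cases):

import shlex

WRAPPERS = {"sh", "bash", "zsh"}

def unwrap(command: str) -> list[list[str]]:
    # Recursively peels `sh -c "..."` style wrappers so the inner
    # command can be analyzed too (assumed logic, not the plugin's).
    tokens = shlex.split(command)
    found = [tokens]
    if len(tokens) >= 3 and tokens[0] in WRAPPERS and tokens[1] == "-c":
        found += unwrap(tokens[2])
    return found

# The rm is visible even though it is wrapped twice:
print(unwrap('sh -c "bash -c \'rm -rf /\'"'))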

As for rm -rf, the default behavior is fairly permissive but smart... deletions in /tmp or in the current working directory are allowed because that's often legitimate; trying to nuke your home directory or system folders, on the other hand, is non-negotiable.
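
Expressed as code, that path policy could look something like this (again, my own sketch of the described behavior, not the plugin's source):

from pathlib import Path

SAFE_PREFIXES = [Path("/tmp"), Path.cwd()]

def rm_allowed(target: str) -> bool:
    # Deletions under /tmp or the current project pass;
    # home and system paths do not.
    resolved = Path(target).expanduser().resolve()
    return any(resolved.is_relative_to(base) for base in SAFE_PREFIXES)

print(rm_allowed("/tmp/build"))   # True
print(rm_allowed("~/"))           # False (unless run from your home dir)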

And for the paranoid (like me), there's a strict mode you enable with SAFETY_NET_STRICT=1. In that mode, any command that can't be parsed is blocked by default, and rm -rf even inside the current project requires confirmation. Better safe than sobbing.

If you're tempted, installation goes through Claude Code's plugin system with two commands:

/plugin marketplace add kenryu42/cc-marketplace
/plugin install safety-net@cc-marketplace

Then you restart Claude Code and it's up and running.

When the plugin blocks a command, it shows an explicit message like "BLOCKED by safety_net.py - Reason: git checkout -- discards uncommitted changes permanently", so you know exactly why it was refused and can decide with full knowledge whether you really want to go ahead.

In short, I've tested this plugin on my projects and it's really cool, so if you run Claude Code in YOLO mode, it'll keep you out of the club of devs who lost everything to an overzealous agent...

When X rewards those who spread false information after a terror attack

By: Korben
December 26, 2025 at 09:20

For those who missed the news, two terrorists opened fire on December 14 during a Hanukkah celebration at Bondi Beach (Sydney, Australia), killing 15 people. And within minutes, X turned, as usual, into a disinformation machine...

A Pakistani businessman bearing the same name as one of the shooters (Naveed Akram) found himself accused of carrying out the attack. His photo was shared thousands of times, he received death threats and his family was even harassed. Except this guy had absolutely nothing to do with the attack; he just shared a very common surname with the actual perpetrator.

But the morons of this planet didn't even stop to consider that. That's how dense they are...

Now you'll tell me: "Yeah, but there are Community Notes to correct that", except those community notes don't exactly work wonders. As an example, according to the Center for Countering Digital Hate, 74% of the misinformation around the 2024 US elections NEVER received a community note. And when a note finally does arrive, it takes between 7 and 75 hours, depending on the case, before it gets shown.

In other words, an eternity on Internet time...

And as if things weren't critical enough already, according to an MIT study, false news spreads 6 times faster than true news on these platforms. In short, we're doomed in the face of human stupidity.

Especially since, according to Timothy Graham, a digital media researcher at QUT in Australia, there is now an economy built around disinformation, particularly on X, because its monetization system pays creators based on the engagement generated by verified users. The more reactions your posts get, the more money you make.

And guess which kind of content generates the most engagement?

Well, the fake stuff, the scandalous stuff, the stuff that stokes tensions.

There was even a fireworks video presented as "Arab celebrations" after the attack that was pure invention. It was actually the local Rotary Club's Christmas fireworks, scheduled months in advance. The thing racked up millions of views before being debunked. Some of you may well have taken the bait on that one.

Worse still, Grok, Elon Musk's AI built into X, flat-out invented the name of the hero who disarmed one of the shooters. When users asked who had saved lives, the AI produced "Edward Crabtree" out of thin air, a completely fictitious name pulled from a fraudulent website created the very day of the attack.

Meanwhile, the real hero of this tragedy, Ahmed al-Ahmed, an Australian of Syrian origin who risked his life to disarm one of the shooters and protect the victims, was barely mentioned. More than 2.6 million dollars have since been raised for him, but you had to dig hard to find the real story while the fake news monopolized attention.

The problem is that X's business model encourages accounts to post fast and loud, without verification. Having 5 million impressions with only 2,000 followers is enough to monetize. And the more reactions you generate, the more you rake in... So posting "BREAKING: shooter identified" with a photo of some random person is profitable, even if it's false.

Especially if it's false, actually... You know that "the mainstream media are hiding things from us, but luckily I saw the alternative truth on X.com, and it's all the fault of the Arabs, Europe, and the Judeo-Masonic-reptilian elites who want to eat our children" syndrome that afflicts this kind of person, whose brain is too atrophied to form a thought of its own.

Now, I don't think I'm being naive; disinformation has always existed. But here we're talking about a rotten system that financially rewards the people who spread it. It's not a bug anymore, it's a feature, and when an innocent man gets death threats because some conspiracy nut in his basement wanted clicks, it makes my blood boil.

In short, as long as engagement remains the king metric and platforms pay for buzz rather than factual accuracy, we'll keep suffering these horrible excesses...

When humanoid robots get hacked in 1 minute over Bluetooth

By: Korben
December 24, 2025 at 18:28

Remember those Unitree robot dogs and humanoids that have been all over social media for the past few months? Well, security researchers have just discovered they can be hacked in under a minute, without even needing internet access. And the worst part is that the flaw is so dumb it's almost comical.

At the GEEKCon conference in Shanghai, the DARKNAVY team gave a chilling demonstration. Expert Ku Shipei took control of a Unitree G1 humanoid robot (a cool 100,000 yuan, roughly 14,000 euros) using only voice commands and a Bluetooth connection. After about a minute of manipulation, the light indicator on the robot's head turned from blue to red, it stopped responding to its official controller, and then, on Ku's orders, it charged at a journalist, swinging its fist.

Lovely atmosphere.

The problem actually comes from how these robots handle their Wi-Fi configuration over Bluetooth Low Energy (BLE). When you set up the network on a Unitree robot, it uses BLE to receive the network name and password, except that this channel does absolutely no filtering of what you send it. You can therefore inject commands directly into the SSID or password fields with the pattern ";$(cmd);#", and presto, code execution as root.
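
The root cause is the classic "user field glued into a shell string" mistake. Here's a minimal defensive sketch; it's my own illustration, nothing to do with Unitree's firmware, and nmcli merely stands in for whatever the robot actually calls:

import re
import subprocess

PRINTABLE = re.compile(r"^[\x20-\x7e]{1,63}$")

def configure_wifi(ssid: str, password: str) -> None:
    # Reject anything with control characters or absurd length...
    if not PRINTABLE.fullmatch(ssid) or not PRINTABLE.fullmatch(password):
        raise ValueError("invalid field")
    # ...and, crucially, pass the fields as argv entries: a payload
    # like ';$(reboot);#' stays an inert string instead of running.
    subprocess.run(["nmcli", "dev", "wifi", "connect", ssid,
                    "password", password], check=True)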

And the even crazier part is that all Unitree robots share the same hard-coded AES key to encrypt BLE control packets, so if you've cracked one G1, you've cracked every G1, H1, Go2 and B2 on the planet. And now you'll ask: what about handshake security? Well, it just checks whether the string contains "unitree" as the secret. Nice job, guys ^^.

As a result, the vulnerability becomes wormable, meaning an infected robot can scan for other Unitree robots within Bluetooth range and compromise them automatically in turn, creating a robot botnet that spreads with no human intervention. Imagine that in a warehouse with 50 robots!! What a mess that would be...

What worries me about these robots is the data-exfiltration architecture, because the G1 carries Intel RealSense D435i cameras, 4 microphones and positioning systems that can capture confidential meetings, photograph sensitive documents or map secured premises. And all of that can be streamed to external servers without your knowledge, especially since telemetry is continuously transmitted to servers in China... You get the picture.

Back in April 2025, researchers had already found an undocumented backdoor in the Go1 robot dog that allowed remote control through a network tunnel plus access to the cameras, so it's not really a surprise that newer models have similar problems, is it?

I imagine some of you tinker with robots on a Raspberry Pi or Arduino, so if you don't want to end up with a robot going freestyle, there are a few things to do. First, for Wi-Fi setup over BLE, never send the SSID and password in cleartext; use a key-agreement scheme like ECDH to establish a shared secret. And above all, validate and sanitize all user input before it goes anywhere near a shell.
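
For the curious, here's roughly what that key agreement looks like with the Python cryptography library. It's a sketch of the principle only, with both keypairs generated in one process for demo purposes:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side mints an ephemeral keypair and exchanges only public keys
# over BLE; the shared secret itself never crosses the air.
robot = X25519PrivateKey.generate()
phone = X25519PrivateKey.generate()

shared = robot.exchange(phone.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"ble-wifi-setup").derive(shared)
# session_key can now encrypt the SSID/password payload (AES-GCM, say)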

Also, change the default keys, because it sounds dumb but it's problem number one. Generate unique per-device keys at first boot or during pairing. You can store them in the Arduino's EEPROM or in a protected file on the Pi.
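
On a Pi, that first-boot key minting can be as simple as this (the path is hypothetical, pick your own):

import os
import secrets

KEY_PATH = "/etc/robot/device.key"       # hypothetical location

if not os.path.exists(KEY_PATH):         # first boot: mint a unique key
    os.makedirs(os.path.dirname(KEY_PATH), exist_ok=True)
    with open(KEY_PATH, "wb") as f:
        f.write(secrets.token_bytes(32)) # 256 random bits per device
    os.chmod(KEY_PATH, 0o600)            # owner-only, not world-readable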

Also think about isolating your robots on a dedicated network... If you're on a Pi, create a separate VLAN and block all unauthorized outbound traffic with iptables. That way, even if a robot is compromised, it can't exfiltrate data or attack other machines.

Oh, and disable Bluetooth when you don't need it! On a Pi, add "dtoverlay=disable-bt" to /boot/config.txt, and on Arduino it's even simpler: if you don't use BLE, don't include it in your project.

In short, these robots are walking Trojan horses. They have sensors, cameras, mics, and now they can be compromised by anyone within Bluetooth range... So if you're working on robotics projects, take the time to secure your wireless communications before you end up with a robot that decides to kill you!! And bookmark this link, because it's where I put all my best robotics news!

And if you're still reading my articles at this hour, I wish you a merry Christmas!

Source

Deep Snake - The Snake-on-LSD game that's blowing up on Steam (and it's free)

By: Korben
December 23, 2025 at 09:00

Remember the Snake game we had on old Nokias?? It made us lose hours eating pixels while trying not to bite our own tail, and personally, nothing annoyed me more.

Well, an indie developer named Dietzribi decided to completely reinvent it as an acid trip through a non-Euclidean universe, and the result is utterly unhinged, you'll see!

Deep Snake is, then, a Snake game, but in 3D with psychedelic neon visuals that pulse along with your progress. You eat apples, your snake grows, and you dodge obstacles. So far, so classic, except that you can keep going deeper into the void, literally, since space twists, folds back on itself, and the farther you go, the trippier it gets.

The game was created in 48 hours during the GMTK Game Jam 2024, and the dev did everything solo, which has clearly worked out for him since the game is killing it on Steam with 96% positive reviews and a spot in the top 3 trending titles.

Gameplay-wise, you get fluid movement (no old-school grid), dynamic arenas with pulsing hazards and ever-shifting color palettes, and even a risk/reward bonus system that grants temporary abilities. There are also daily and all-time leaderboards for high-score addicts, plus Steam achievements and cloud saves.

The game runs on just about anything (Intel HD 4000, 4 GB of RAM, 100 MB of storage), there's even an online version, it supports 13 languages including French, and above all... it's completely free. The cheapskates will be thrilled!

So if you're up for a little visual trip between two serious tasks, Deep Snake is clearly made for you, friends. Sessions are short, the speed ramps up gradually, and when you crash it's instant, so you can retry right away.

And thanks to Lorenper for unearthing this gem!

CENI - China has just brought ARPANET's heir online

By: Korben
December 21, 2025 at 17:11

Remember ARPANET, that American military network from the 60s and 70s that gave birth to the Internet? In 2006, the United States did it again with GENI, another research network for testing the future technologies of the web, which ran for more than a decade before gradually winding down until 2023.

And today, guess who just picked up the torch?

China, of course!

The country has just announced the official commissioning of CENI, for China Environment for Network Innovation, which is nothing less than China's first national infrastructure dedicated to innovation in network technologies. And the specs are a bit scary, I have to say...

It's a network linking 40 Chinese cities through more than 55,000 km of optical fiber, and it took over 10 years to build. The thing can support 128 heterogeneous networks simultaneously and run 4,096 service tests in parallel... Numbers-wise, that's pretty hefty.

To give you an idea of what it can do, they ran a transfer test with the FAST radio telescope in Guizhou province. The result: 72 terabytes of data transferred to Hubei province in barely 1.6 hours over a distance of about 1,000 km. A quick calculation gives a sustained throughput close to 100 Gbit/s... On a 1 Gbit/s consumer fiber connection, the same transfer would take about a week.
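
The arithmetic checks out, by the way; here's the back-of-the-envelope version:

# Back-of-the-envelope check on the quoted figures:
data_bits = 72e12 * 8                # 72 TB expressed in bits
seconds = 1.6 * 3600                 # 1.6 hours in seconds
print(data_bits / seconds / 1e9)     # -> 100.0, i.e. ~100 Gbit/s
# At a consumer 1 Gbit/s, the same 5.76e14 bits take ~6.7 days.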

But the most impressive part is how stable the thing is. According to Liu Yunjie, chief scientist at the Zijin Mountain lab, CENI shows zero packet loss in testing, with latency jitter under 20 microseconds even at full load, on a 13,000 km backbone crossing 13 provinces and handling 10,000 deterministic services. That's not the kind of performance you see every day...

And on the applications side, it's on fire! Huawei and Baidu are already on board to test their respective technologies, notably AI models with 100 billion parameters, where each training iteration takes only 16 seconds thanks to CENI's bandwidth for synchronizing GPUs. Support for 5G-A and 6G technologies is also planned, along with applications in industry, energy, healthcare and education.

Their next step will be to connect 100 leading universities and companies to the network.

CENI's stated goal is thus to develop innovations "5 to 10 years ahead of the industry" and, I quote, to "take the initiative in international competition in cyberspace". In short, it's also a matter of technological sovereignty and geopolitical positioning.

China is thus explicitly taking over the torch from the now-abandoned American network research projects. ARPANET paved the way in the 70s, GENI carried it until 2023, and now CENI becomes the world's laboratory for the network architectures of the future.

And with 221 patents filed, 139 software copyrights and 206 international and national standards, the project already has a solid intellectual property base...

We'll see what comes out of it in the years ahead...

Source

OBS Studio 32 lands with a brand-new rendering engine for macOS

By: Korben
December 21, 2025 at 10:07

Moving from OpenGL to Metal was clearly no small job for the OBS team. Apple's technology shipped on macOS about ten years ago, but the migration took them a little while... And don't go thinking I'm "judging"... All that time is normal, because OBS is cross-platform software, so they have to juggle three ecosystems in parallel, and that eats up an enormous amount of time.

All of OBS's visual effects had to be rewritten to work with Metal, Apple's graphics API being far more demanding than its Windows counterpart, and the preview can occasionally stutter slightly because of macOS, but the final stream stays flawless.

Performance-wise, Metal matches or beats OpenGL in Release builds, but the biggest change is for debugging, since developers now have access to far better diagnostic tools, which should speed up bug fixes and future improvements.

To enable it (yeah, let's go!!), it's dead simple. Go into OBS 32.0's settings, Advanced tab, Video section, and select Metal in the renderer drop-down. A quick restart of the app and presto, you're on the new engine.

What's also cool about this 32.0 release is that it includes a plugin manager and improvements to the NVIDIA RTX features.

The OBS team is also working on Vulkan backends for Linux and Direct3D 12 for Windows, because older APIs like OpenGL and D3D11 are getting less and less support from GPU vendors, so if you're on Linux or Windows, your turn will come too.

There you go; if it glitches, switch back to OpenGL, but odds are decent it'll run better than before.

Source

When a TP-Link security camera leaves its HTTPS keys lying around...

By: Korben
December 20, 2025 at 19:04

You may have a Tapo C200 camera running at home to keep an eye on the cat, the baby or the front door. That's my case, and I love this camera, but I have bad news for you... Security researcher Simone Margaritelli (aka evilsocket) just spent 150 days dissecting it, and the result isn't pretty for TP-Link.

Let's start with the biggest WTF he found... the camera's HTTPS private key, the thing that's supposed to be ultra-secret and encrypts communications. Well, it's hardcoded in the firmware. So it's the same key for ALL cameras of the same model. As a result, anyone can run a man-in-the-middle attack and intercept what your camera sees. Off to a great start, right? ^^

And wait, it doesn't stop there, because Margaritelli found an Amazon S3 bucket, completely open to the public, containing ALL the firmware for ALL TP-Link products. Open bar, no authentication, Christmas come early for security researchers... and for hackers.

Digging through the firmware with Ghidra and Claude (yes, AI helped with the reverse engineering), the researcher uncovered four critical flaws. The first is a buffer overflow in the SOAP XML parser used by the ONVIF protocol. Basically, if you send a message that's too long, the camera crashes. No authentication needed; a single HTTP request does it.

The second flaw is the same kind of thing, but in the Content-Length header. Send 4294967295 (the maximum of a 32-bit integer) and boom, integer overflow. The third is the cherry on top: the connectAp endpoint remains reachable without authentication even after initial setup. An attacker can therefore force your camera to join their own malicious WiFi network and intercept the whole video feed. You didn't see that one coming, did you?
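
If you're wondering why that exact value is nasty: it's the largest number an unsigned 32-bit integer can hold, so any arithmetic the parser does on top of it wraps around. Two lines make the point:

# 4294967295 is the largest unsigned 32-bit value; any arithmetic on
# top of it (a +1 for a terminator, a rounding-up) wraps to zero, so
# the code allocates far less memory than it then tries to fill.
MAX_U32 = 4_294_967_295
print((MAX_U32 + 1) & 0xFFFFFFFF)    # -> 0 after 32-bit truncation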

And the fourth flaw, last but not least, is the scanApList API, which hands out the list of all WiFi networks around the camera, no auth required. With the harvested BSSIDs and a tool like apple_bssid_locator, you can physically geolocate the camera to within a few meters. With 25,000 cameras exposed on the net, that's chilling.

The most frustrating part of this story is that Margaritelli reported all of it in July 2025, and TP-Link asked for deadline extensions, again and again, for more than 150 days. In the end the flaws were fixed, but with no patch listed on the public CVE pages. Oh, and a funny little detail: since TP-Link is its own CVE Numbering Authority, it gets to grade its own vulnerabilities. No conflict of interest there at all... ahem, ahem...

The researcher estimates that around 25,000 of these cameras are exposed directly to the Internet, so if, like me, you own one, check that the firmware is up to date and, above all, NEVER expose it directly to the net. Put it behind a VPN or on an isolated network.

I find it cool that Margaritelli used AI to speed up the reverse-engineering phase. With Claude Opus and Sonnet hooked up through GhidraMCP, he could analyze the assembly code, and that's how the AI quickly identified the vulnerable functions and explained how the code worked. AI as a hacking tool is pretty wild...

So if you have TP-Link gear at home, keep an eye on updates and think twice before exposing it to the net. And if you enjoy a good read, the full analysis is available on evilsocket's blog.

Nice work!

From building a workforce to boosting research and education – future quantum leaders have their say

December 16, 2025 at 12:15

The International Year of Quantum Science and Technology has celebrated all the great developments in the sector – but what challenges and opportunities lie in store? That was the question deliberated by four future leaders in the field at the Royal Institution in central London in November. The discussion took place during the two-day conference “Quantum science and technology: the first 100 years; our quantum future”, which was part of a week-long series of quantum-related events in the UK organized by the Institute of Physics.

As well as outlining the technical challenges in their fields, the speakers all stressed the importance of developing a “skills pipeline” so that the quantum sector has enough talented people to meet its needs. Also vital will be the need to communicate the mysteries and potential of quantum technology – not just to the public but to industrialists, government officials and venture capitalists.

Two of the speakers – Nicole Gillett (Riverlane) and Muhammad Hamza Waseem (Quantinuum) – are from the quantum tech industry, with Mehul Malik (Heriot-Watt University) and Sarah Alam Malik (University College London) based in academia. The following is an edited version of the discussion.

Quantum’s future leaders

Deep thinkers: The challenges and opportunities for quantum science and technology were discussed during a conference organized by the Institute of Physics at the Royal Institution on 5 November 2025 by (left to right, seated) Muhammad Hamza Waseem, Sarah Alam Malik, Mehul Malik and Nicole Gillett. The discussion was chaired by Physics World editor-in-chief Matin Durrani (standing, far right). (Courtesy: Tushna Commissariat)

Nicole Gillett is a senior software engineer at Riverlane, in Cambridge, UK. The company is a leader in quantum error correction, which is a critical part of a fully functioning, fault-tolerant quantum computer. Errors arise because quantum bits, or qubits, are so fragile and correcting them is far trickier than with classical devices. Riverlane is therefore trying to find ways to correct for errors without disturbing a device’s quantum states. Gillett is part of a team trying to understand how best to implement error-correcting algorithms on real quantum-computing chips.

Mehul Malik, who studied physics at a liberal arts college in New York, was attracted to quantum physics because of what he calls a “weird middle ground between artistic creative thought and the rigour of physics”. After doing a PhD at the University of Rochester, he spent five years as a postdoc with Anton Zeilinger at the University of Vienna in Austria before moving to Heriot-Watt University in the UK. As head of its Beyond Binary Quantum Information research group, Malik works on quantum information processing and communication and fundamental studies of entanglement.

Sarah Alam Malik is a particle physicist at University College London, using particle colliders to detect and study potential candidates for dark matter. She is also trying to use quantum computers to speed up the discovery of new physics given that what she calls “our most cherished and compelling theories” for physics beyond the Standard Model, such as supersymmetry, have not yet been seen. In particular, Malik is trying to find new physics in a way that’s “model agnostic” – in other words, using quantum computers to search particle-collision data for anomalous events that have not been seen before.

Muhammad Hamza Waseem studied electrical engineering in Pakistan, but got hooked on quantum physics after getting involved in recreating experiments to test Bell’s inequalities in what he claims was the first quantum optics lab in the country. Waseem then moved to the University of Oxford in the UK to do a PhD studying spin waves to make classical and quantum logic circuits. Unable to work when his lab shut during the COVID-19 pandemic, Waseem approached Quantinuum to see if he could help them in their quest to build quantum computers using ion traps. Now based at the company, he studies how quantum computers can do natural-language processing. “Think ChatGPT, but powered with quantum computers,” he says.

What will be the biggest or most important application of quantum technology in your field over the next 10 years?

Nicole Gillett: If you look at roadmaps of quantum-computing companies, you’ll find that IBM, for example, intends to build the world’s first utility-scale and fault-tolerant quantum computer by the end of the decade. Beyond 2033, they’re committing to have a system that could support 2000 “logical qubits”, which are essentially error-corrected qubits, in which the data of one qubit has been encoded into many qubits.

What can be achieved with that number of qubits is a difficult question to answer but some theorists, such as Juan Maldacena, have proposed some very exotic ideas, such as using a system of 7000 qubits to simulate black-hole dynamics. Now that might not be a particularly useful industry application, but it tells you about the potential power of a machine like this.

Mehul Malik: In my field, quantum networks that can distribute individual quantum particles or entangled states over large and short distances will have a significant impact within the next 10 years. Quantum networks will connect smaller, powerful quantum processors to make a larger quantum device, whether for computing or communication. The technology is quite mature – in fact, we’ve already got a quantum network connecting banks in London.

I will also add something slightly controversial. We often try to distinguish between quantum and non-quantum technologies, but what we’re heading towards is combining classical state-of-the-art devices with technology based on inherently quantum effects – what you might call “quantum adjacent technology”. Single-photon detectors, for example, are going to revolutionize healthcare, medical imaging and even long-distance communication.

Sarah Alam Malik: For me, the biggest impact of quantum technology will be applying quantum computing algorithms in physics. Can we quantum simulate the dynamics of, say, proton–proton collisions in a more efficient and accurate manner? Can we combine quantum computing with machine learning to sift through data and identify anomalous collisions that are beyond those expected from the Standard Model?

Quantum technology is letting us ask very fundamental questions about nature.

Sarah Alam Malik, University College London

Quantum technology, in other words, is letting us ask very fundamental questions about nature. Emerging in theoretical physics, for example, is the idea that the fundamental layer of reality may not be particles and fields, but units of quantum information. We’re looking at the world through this new quantum-theoretic lens and asking questions like, whether it’s possible to measure entanglement in top quarks and even explore Bell-type inequalities at particle colliders.

One interesting quantity is “magic”, which is a measure of how far a system is from being classically simulable (Phys. Rev. D 110 116016). The more magic there is in a system, the harder it is to simulate classically – and therefore the greater the computational resource it possesses for quantum computing. We’re asking how much “magic” there is in, for instance, top quarks produced at the Large Hadron Collider. So one of the most important developments for me may well be asking questions in a very different way to before.

Muhammad Hamza Waseem: Technologically speaking, the biggest impact will be simulating quantum systems using a quantum computer. In fact, researchers from Google already claim to have simulated a wormhole in a quantum computer, albeit a very simple version that could have been tackled with a classical device (Nature 612 55).

But the most significant impact has to do with education. I believe quantum theory teaches us that reality is not about particles and individuals – but relations. I’m not saying that particles don’t exist but they emerge from the relations. In fact, with colleagues at the University of Oxford, we’ve used this idea to develop a new way of teaching quantum theory, called Quantum in Pictures.

We’ve already tried our diagrammatic approach with a group of 16–18-year-olds, teaching them the entire quantum-information course that’s normally given to postgraduates at Oxford. At the end of our two-month course, which had one lecture and tutorial per week, students took an exam with questions from past Oxford papers. An amazing 80% of students passed and half got distinctions.

For quantum theory to have a big impact, we have to make quantum physics more accessible to everyone.

Muhammad Hamza Waseem, Quantinuum

I’ve also tried the same approach on pupils in Pakistan: the youngest, who was just 13, can now explain quantum teleportation and quantum entanglement. My point is that for quantum theory to have a big impact, we have to make quantum physics more accessible to everyone.

What will be the biggest challenges and difficulties over the next 10 years for people in quantum science and technology?

Nicole Gillett: The challenge will be building up a big enough quantum workforce. Sometimes people hear the words “quantum computer” and get scared, worrying they’re going to have to solve Hamiltonians all the time. But is it possible to teach students at high-school level about these concepts? Can we get the ideas across in a way that is easy to understand so people are interested and excited about quantum computing?

At Riverlane, we’ve run week-long summer workshops for the last two years, where we try to teach undergraduate students enough about quantum error correction so they can do “decoding”. That’s when you take the results of error correction and try to figure out what errors occurred on your qubits. By combining lectures and hands-on tutorials we found we could teach students about error corrections – and get them really excited too.

Our biggest challenge will be not having a workforce ready for quantum computing.

Nicole Gillett, Riverlane

We had students from physics, philosophy, maths and computer science take the course – the only prerequisite, apart from being curious about quantum computers, is some kind of coding ability. My point is that these kinds of boot camps are going to be so important to inspire future generations. We need to make the information accessible to people because otherwise our biggest challenge will be not having a workforce ready for quantum computing.

Mehul Malik: One of the big challenges is international cooperation and collaboration. Imagine if, in the early days of the Internet, the US military had decided they’d keep it to themselves for national-security reasons or if CERN hadn’t made the World Wide Web open source. We face the same challenge today because we live in a world that’s becoming polarized and protectionist – and we don’t want that to hamper international collaboration.

Over the last few decades, quantum science has developed in a very international way and we have come so far because of that. I have lived in four different continents, but when I try to recruit internationally, I face significant hurdles from the UK government, from visa fees and so on. To really progress in quantum tech, we need to collaborate and develop science in a way that’s best for humanity not just for each nation.

Sarah Alam Malik: One of the most important challenges will be managing the hype that inevitably surrounds the field right now. We’ve already seen this with artificial intelligence (AI), which has gone through the whole hype cycle. Lots of people were initially interested, then the funding dried up when reality didn’t match expectations. But now AI has come back with such resounding force that we’re almost unprepared for all the implications of it.

Quantum can learn from the AI hype cycle, finding ways to manage expectations of what could be a very transformative technology. In the near- and mid-term, we need to not overplay things and be cautious of this potentially transformative technology – yet be braced for the impact it could potentially have. It’s a case of balancing hype with reality.

Muhammad Hamza Waseem: Another important challenge is how to distribute funding between research on applications and research on foundations. A lot of the good technology we use today emerged from foundational ideas in ways that were not foreseen by the people originally working on them. So we must ensure that foundational research gets the funding it deserves or we’ll hit a dead end at some point.

Will quantum tech alter how we do research, just as AI could do?

Mehul Malik: AI is already changing how I do research, speeding up the way I discover knowledge. Using Google Gemini, for example, I now ask my browser questions instead of searching for specific things. But you still have to verify all the information you gather, for example, by checking the links it cites. I recently asked AI a complex physics question to which I knew the answer and the solution it gave was terrible. As for how quantum is changing research, I’m less sure, but better detectors through quantum-enabled research will certainly be good.

Muhammad Hamza Waseem: AI is already being deployed in foundational research, for example, to discover materials for more efficient batteries. A lot of these applications could be integrated with quantum computing in some way to speed work up. In other words, a better understanding of quantum tech will let us develop AI that is safer, more reliable, more interpretable – and if something goes wrong, you know how to fix it. It’s an exciting time to be a researcher, especially in physics.

Sarah Alam Malik: I’ve often wondered if AI, with the breadth of knowledge that it has across all different fields, already has answers to questions that we couldn’t answer – or haven’t been able to answer – just because of the boundaries between disciplines. I’m a physicist and so can’t easily solve problems in biology. But could AI help us to do breakthrough research at the interface between disciplines?

What lessons can we learn from the boom in AI when it comes to the long-term future of quantum tech?

Nicole Gillett: As a software engineer, I once worked at an Internet security company called Cloudflare, which taught me that it’s never too early to be thinking about how any new technology – both AI and quantum – might be abused. What’s also really interesting is whether AI and machine learning can be used to build quantum computers by developing the coding algorithms they need. Companies like Google are active in this area, and so is Riverlane.

Mehul Malik: I recently discussed this question with a friend who works in AI, who said that the huge AI boom in industry, with all the money flowing into it, has effectively killed academic research in the field. A lot of AI research is now industry-led and goal-orientated – and there’s a risk that the economic advantages of AI will kill curiosity-driven research. The remedy, according to my friend, is to pay academics in AI more as they are currently being offered much larger salaries to work in the private sector.

We need to diversify so that the power to control or chart the course of quantum technologies is not in the hands of a few privileged monopolies.

Mehul Malik, Heriot-Watt University

Another issue is that a lot of power is in the hands of just a few companies, such as Nvidia and ASML. The lesson for the quantum sector is that we need to diversify early on so that the power to control or chart the course of quantum technologies is not in the hands of a few privileged monopolies.

Sarah Alam Malik: Quantum technology has a lot to learn from AI, which has shown that we need to break down the barriers between disciplines. After all, some of the most interesting and impactful research in AI has happened because companies can hire whoever they need to work on a particular problem, whether it’s a computer scientist, a biologist, a chemist, a physicist or a mathematician.

Nature doesn’t differentiate between biology and physics. In academia we not only need people who are hyper specialized but also a crop of generalists who are knee-deep in one field but have experience in other areas too.

The lesson from the AI boom is to blur the artificial boundaries between disciplines and make them more porous. In fact, quantum is a fantastic playground for that because it is inherently interdisciplinary. You have to bring together people from different disciplines to deliver this kind of technology.

Muhammad Hamza Waseem: AI research is in a weird situation where there are lots of excellent applications but so little is understood about how AI machines work. We have no good scientific theory of intelligence or of consciousness. We need to make sure that quantum computing research does not become like that and that academic research scientists are well-funded and not distracted by all the hype that industry always creates.

At the start of the previous century, the mathematician David Hilbert said something like “physics is becoming too difficult for the physicists”. I think quantum computing is also becoming somewhat too challenging for the quantum physicists. We need everyone to get involved for the field to reach its true potential.

Towards “green” quantum technology

Green leaf on the converging point of computer circuit board
(Courtesy: iStock/Peach)

Today’s AI systems use vast amounts of energy, but should we also be concerned about the environmental impact of quantum computers? Google, for example, has already carried out quantum error-correction experiments in which data from the company’s quantum computers had to be processed once per round of error correction – that is, every microsecond (Nature 638 920). “Finding ways to process it to keep up with the rate at which it’s being generated is a very interesting area of research,” says Nicole Gillett.

However, quantum computers could cut our energy consumption by allowing calculations to be performed far more quickly and efficiently than is possible with classical machines. For Mehul Malik, another important step towards “green” quantum technology will be to lower the energy that quantum devices require and to build detectors that work at room temperature and are robust against noise. Quantum computers themselves can also help, he thinks, by discovering energy-efficient technologies, materials and batteries.

A quantum laptop?

Futuristic abstract low poly wireframe vector illustration with glowing briefcase and speech bubbles
(Courtesy: iStock/inkoly)

Will we ever see portable quantum computers or will they always be like today’s cloud-computing devices in distant data centres? Muhammad Hamza Waseem certainly does not envisage a word processor that uses a quantum computer. But he points to companies like SPINQ, which has built a two-qubit computer for educational purposes. “In a sense, we already have a portable quantum computer,” he says. For Mehul Malik, though, it’s all about the market. “If there’s a need for it,” he joked, “then somebody will make it.”

If I were science minister…

Politician speaking to reporters illustration
(Courtesy: Shutterstock/jenny on the moon)

When asked by Peter Knight – one of the driving forces behind the UK’s quantum-technology programme – what the panel would do if they were science minister, Nicole Gillett said she would seek to make the UK the leader in quantum computing by investing heavily in education. Mehul Malik would cut the costs of scientists moving across borders, pointing out that many big firms have been founded by immigrants. Sarah Alam Malik called for long-term funding – and for not giving up if short-term gains fail to materialize. Muhammad Hamza Waseem, meanwhile, said we should invest more in education, research and the international mobility of scientists.

This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post From building a workforce to boosting research and education – future quantum leaders have their say appeared first on Physics World.

Institute of Physics celebrates 2025 Business Award winners at parliamentary event

12 December 2025 at 12:00

A total of 14 physics-based firms in sectors from quantum and energy to healthcare and aerospace have won 2025 Business Awards from the Institute of Physics (IOP), which publishes Physics World. The awards were presented at a reception in the Palace of Westminster yesterday attended by senior parliamentarians and policymakers as well as investors, funders and industry leaders.

The IOP Business Awards, which have been running since 2012, recognise the role that physics and physicists play in the economy, creating jobs and growth “by powering innovation to meet the challenges facing us today, ranging from climate change to better healthcare and food production”. More than 100 firms have now won Business Awards, with around 90% of those companies still commercially active.

The parliamentary event honouring the 2025 winners was hosted by Dave Robertson, the Labour MP for Lichfield, who spent 10 years as a physics teacher in Birmingham before working for teaching unions. There was also a speech from Baron Sharma, who studied applied physics before moving into finance and later becoming a Conservative MP, Cabinet minister and president of the COP-26 climate summit.

Seven firms were awarded 2025 IOP Business Innovation Awards, which recognize companies that have “delivered significant economic and/or societal impact through the application of physics”. They include Oxford-based Tokamak Energy, which has developed “compact, powerful, robust, quench-resilient” high-temperature superconducting magnets for commercial fusion energy and for propulsion systems, accelerators and scientific instruments.

(courtesy: Carmen Valino)

Oxford Instruments was honoured for developing a novel analytical technique for scanning electron microscopes, enabling new capabilities and accelerating time to results by at least an order of magnitude. Ionoptika, meanwhile, was recognized for developing Q-One, a new generation of focused ion-beam instrumentation that provides single-atom through to high-dose nanoscale advanced materials engineering for photonic and quantum technologies.

The other four winners were: electronics firm FlexEnable for their organic transistor materials; Lynkeos Technology for the development of muonography in the nuclear industry; the renewable energy company Sunamp for their thermal storage system; and the defence and security giant Thales UK for the development of a solid-state laser for laser rangefinders.

Business potential

Six other companies have won an IOP Start-up Award, which celebrates young companies “with a great business idea founded on a physics invention, with the potential for business growth and significant societal impact”. They include Astron Systems for developing “long-lifetime turbomachinery to enable multi-reuse small rocket engines and bring about fully reusable small launch vehicles”, along with MirZyme Therapeutics for “pioneering diagnostics and therapeutics to eliminate preeclampsia and transform maternal health”.

The other four winners were: Celtic Terahertz Technology for a metamaterial filter technology; Nellie Technologies for an algae-based carbon removal technology; Quantum Science for their development of short-wave infrared quantum dot technology; and Wayland Additive for the development and commercialisation of charge-neutralised electron beam metal additive manufacturing.

James McKenzie, a former vice-president for business at the IOP, who was involved in judging the awards, says that all awardees are “worthy winners”. “It’s the passion, skill and enthusiasm that always impresses me,” McKenzie told Physics World.

iFAST Diagnostics was also awarded the IOP Lee Lucas Award, which recognises early-stage companies taking innovative products into the medical and healthcare sector. The firm, which was spun out of the University of Southampton, develops blood tests that can identify the right treatment for bacterial infections in a matter of hours rather than days. It expects to gain approval for testing next year.

“Especially inspiring was the team behind iFAST,” adds McKenzie, “who developed a very rapid testing method, cutting the time from 48 hours to three hours, so patients can be given the right antibiotics.”

“The award-winning businesses are all outstanding examples of what can be achieved when we build upon the strengths we have, and drive innovation off the back of our world-leading discovery science,” noted Tom Grinyer, IOP chief executive officer. “In the coming years, physics will continue to shape our lives, and we have some great strengths to build upon here in the UK, not only in specific sectors such as quantum, semiconductors and the green economy, but in our strong academic research and innovation base, our growing pipeline of spin-out and early-stage companies, our international collaborations and our growing venture capital community.”

For the full list of winners, see here.

The post Institute of Physics celebrates 2025 Business Award winners at parliamentary event appeared first on Physics World.

Unitree R1: the humanoid robot set to shake up the market… from $4900!

By: Sabine
4 December 2025 at 15:15

A humanoid robot for under $5,000? Unitree has done it. The R1 arrives with the agility and intelligence of a science-fiction robot, but at last within reach of ordinary budgets. A small revolution ready to shake up how robots are used! Rent a Unitree robot for your demonstrations and events Accessible humanoid robotics ... Read more

The post Unitree R1: the humanoid robot set to shake up the market… from $4900! appeared first on RealiteVirtuelle.com.

When is good enough ‘good enough’?

1 December 2025 at 12:00

Whether you’re running a business project, carrying out scientific research, or doing a spot of DIY around the house, knowing when something is “good enough” can be a tough question to answer. To me, “good enough” means something that is fit for purpose. It’s about striking a balance between the effort required to achieve perfection and the cost of not moving forward. It’s an essential mindset when perfection is either not needed or – as is often the case – not attainable.

When striving for good enough, the important thing to focus on is that your outcome should meet expectations, but not massively exceed them. Sounds simple, but how often have we heard people say they’re “polishing coal”, striving for “gold plated” or “trying to make a silk purse out of a sow’s ear”? It basically means they haven’t understood, defined or even accepted the requirements of the end goal.

Trouble is, as we go through school, college and university, we’re brought up to believe that we should strive for the best in whatever we study. Those with the highest grades, we’re told, will probably get the best opportunities and career openings. Unfortunately, this approach means we think we need to aim for perfection in everything in life, which is not always a good thing.

How to be good enough

So why is aiming for “good enough” a good thing to do? First, there’s the notion of “diminishing returns”. It takes a disproportionate amount of effort to achieve the final, small improvements that most people won’t even notice. Put simply, time can be wasted on unnecessary refinements, as embodied by the 80/20 rule (see box).

The 80/20 rule: the guiding principle of “good enough”

Also known as the Pareto principle – in honour of the Italian economist Vilfredo Pareto, who first came up with the idea – the 80/20 rule states that for many outcomes, 80% of consequences or results come from 20% of the causes or effort. The principle helps to identify where to prioritize activities to boost productivity and get better results. It is a guideline, and the ratios can vary, but it can be applied to many things in both our professional and personal lives.

Examples from the world of business include the following (a toy numerical check follows the list):

Business sales: 80% of a company’s revenue might come from 20% of its customers.

Company productivity: 80% of your results may come from 20% of your daily tasks.

Software development: 80% of bugs could be caused by 20% of the code.

Quality control: 20% of defects may cause 80% of customer complaints.
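To make the principle concrete, here is a minimal Python check – the revenue figures are invented purely for illustration – of what share of total revenue the top 20% of customers contribute:

# Toy check of the 80/20 rule on made-up revenue figures
revenues = sorted([1200, 950, 80, 60, 55, 40, 30, 25, 20, 15], reverse=True)
top_fifth = revenues[: max(1, len(revenues) // 5)]  # the top 2 of 10 customers
share = sum(top_fifth) / sum(revenues)
print(f"Top 20% of customers generate {share:.0%} of revenue")  # ~87% here

In this made-up data set the concentration is even stronger than 80/20 – which is really the point: the ratio is a guideline, not a law.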

Good enough also helps us to focus efforts. When a consumer or customer doesn’t know exactly what they want, or a product development route is uncertain, it can be better to deliver things in small chunks. Delivering something basic but usable lets you solicit feedback that clarifies requirements or suggests improvements and additions to incorporate into the next chunk. This is broadly along the lines of a “minimum viable product”.

Not seeking perfection reminds us too that solutions to problems are often uncertain. If it’s not clear how, or even if, something might work, a proof of concept (PoC) can instead be a good way to try something out. Progress can be made by solving a specific technical challenge, whether via a basic experiment, demonstration or short piece of research. A PoC should help avoid committing significant time and resource to something that will never work.

Aiming for “good enough” naturally leads us to the notion of “continuous improvement”. It’s a personal favourite of mine because it allows for things to be improved incrementally as we learn or get feedback, rather than producing something in one go and then forgetting about it. It helps keep things current and relevant and encourages a culture of constantly looking for a better way to do things.

Finally, when searching for good enough, don’t forget the idea of ballpark estimates. Making approximations sounds too simple to be effective, but sometimes a rough estimate is really all you need. If an approximate guess can inform and guide your next steps or determine whether further action will be necessary then go for it. 

The benefits of good enough

Being good enough doesn’t just lead to practical outcomes – it can benefit our personal well-being too. Our time, after all, is a precious commodity and we can’t magically increase this resource. The pursuit of perfection can lead to stagnation, and ultimately burnout, whereas achieving good enough allows us to move on in a timely fashion.

A good-enough approach will even make you less stressed. By getting things done sooner and achieving more, you’ll feel freer and happier about your work even if it means accepting imperfection. Mistakes and errors are inevitable in life, so don’t be afraid to make them; use them as learning opportunities, rather than seeing them as something bad. Remember – the person who never made a mistake never got out of bed.

Recognizing that you’ve done the best you can for now is also crucial for starting new projects and making progress. By accepting good enough you can build momentum, get more things done, and consistently take actions toward achieving your goals.

Finally, good enough is also about shared ownership. By inviting someone else to look at what you’ve done, you can significantly speed up the process. In my own career I’ve often found myself agonising over some obscure detail or feeling something is missing, only to have my quandary solved almost instantly simply by getting someone else involved – making me wish I’d asked them sooner.

Caveats and conclusions

Good enough comes with some caveats. Regulatory or legislative requirements mean there will always be projects that have to reach a minimum standard, which will be your top priority. The precise nature of good enough will also depend on whether you’re making stuff (be it cars or computers) or dealing with intangible commodities such as software or services.

So what’s the conclusion? Well, in the interests of my own time, I’ve decided to apply the 80/20 rule and leave it to you to draw your own conclusion. As far as I’m concerned, I think this article has been good enough, but I’m sure you’ll let me know if it hasn’t. Consider it a minimum viable product that I can update in a future column.

The post When is good enough ‘good enough’? appeared first on Physics World.

The future of quantum physics and technology debated at the Royal Institution

14 November 2025 at 17:41

As we enter the final stretch of the International Year of Quantum Science and Technology (IYQ), I hope you’ve enjoyed our extensive quantum coverage over the last 12 months. We’ve tackled the history of the subject, explored some of the unexplained mysteries that still make quantum physics so exciting, and examined many of the commercial applications of quantum technology. You can find most of our coverage collected into two free-to-read digital Quantum Briefings, available here and here on the Physics World website.

In the 100 years since Werner Heisenberg first developed quantum mechanics on the island of Helgoland in June 1925, the theory has proved to be incredibly powerful, successful and logically consistent. Our understanding of the subatomic world is no longer the “lamentable hodgepodge of hypotheses, principles, theorems and computational recipes”, as the Israeli physicist and philosopher Max Jammer once memorably described it.

In fact, quantum mechanics has not just transformed our understanding of the natural world; it has immense practical ramifications too, with so-called “quantum 1.0” technologies – lasers, semiconductors and electronics – underpinning our modern world. But as was clear from the UK National Quantum Technologies Showcase in London last week, organized by Innovate UK, the “quantum 2.0” revolution is now in full swing.

The day-long event, which is now in its 10th year, featured over 100 exhibitors, including many companies that are already using fundamental quantum concepts such as entanglement and superposition to support the burgeoning fields of quantum computing, quantum sensing and quantum communication. The show was attended by more than 3000 delegates, some of whom almost had to be ushered out of the door at closing time, so keen were they to keep talking.

Last week also saw a two-day conference at the historic Royal Institution (RI) in central London that was a centrepiece of IYQ in the UK and Ireland. Entitled Quantum Science and Technology: the First 100 Years; Our Quantum Future and attended by over 300 people, it was organized by the History of Physics and the Business Innovation and Growth groups of the Institute of Physics (IOP), which publishes Physics World.

The first day, focusing on the foundations of quantum mechanics, ended with a panel discussion – chaired by my colleague Tushna Commissariat and Daisy Shearer from the UK’s National Quantum Computing Centre – with physicists Fay Dowker (Imperial College), Jim Al-Khalili (University of Surrey) and Peter Knight. They talked about whether the quantum wavefunction provides a complete description of physical reality, prompting much discussion with the audience. As Al-Khalili wryly noted, if entanglement has emerged as the fundamental feature of quantum reality, then “decoherence is her annoying and ever-present little brother”.

Knight, meanwhile, who is a powerful figure in quantum-policy circles, went as far as to say that the limit of decoherence – and indeed the boundary between the classical and quantum worlds – is not a fixed, yet-to-be-revealed point. Instead, he mused, it will be determined by how much money, ingenuity and time physicists have at their disposal.

On the second day of the IOP conference at the RI, I chaired a discussion that brought together four future leaders of the subject: Mehul Malik (Heriot-Watt University) and Sarah Malik (University College London) along with industry insiders Nicole Gillett (Riverlane) and Muhammad Hamza Waseem (Quantinuum).

As well as outlining the technical challenges in their fields, the speakers all stressed the importance of developing a “skills pipeline” so that the quantum sector has enough talented people to meet its needs. Also vital will be the need to communicate the mysteries and potential of quantum technology – not just to the public but to industrialists, government officials and venture capitalists. By many measures, the UK is at the forefront of quantum tech – and it is a lead it should not let slip.

Clear talker Jim Al-Khalili giving his Friday night discourse at the Royal Institution on 7 November 2025. (Courtesy: Matin Durrani)

The week ended with Al-Khalili giving a public lecture, also at the Royal Institution, entitled “A new quantum world: ‘spooky’ physics to tech revolution”. It formed part of the RI’s famous Friday night “discourses”, which this year celebrate their 200th anniversary. Al-Khalili, who also presents A Life Scientific on BBC Radio 4, is now the only person ever to have given three RI discourses.

After the lecture, which was sold out, he took part in a panel discussion with Knight and Elizabeth Cunningham, a former vice-president for membership at the IOP. Al-Khalili was later presented with a special bottle of “Glentanglement” whisky made by Glasgow-based Fraunhofer UK for the Scottish Quantum Technology cluster.

The post The future of quantum physics and technology debated at the Royal Institution appeared first on Physics World.

SEMICON Europa 2025 presents cutting-edge technology for semiconductor R&D and production

12 November 2025 at 17:38

“Global collaborations for European economic resilience” is the theme of SEMICON Europa 2025. The event comes to Munich, Germany, on 18–21 November, attracting some 25,000 semiconductor professionals who will enjoy presentations from over 200 speakers.

The TechARENA portion of the event will cover a wide range of technology-related issues including new materials, future computing paradigms and the development of hi-tech skills in the European workforce. There will also be an Executive Forum, which will feature leaders in industry and government and will cover topics including silicon geopolitics and the use of artificial intelligence in semiconductor manufacturing.

SEMICON Europa will be held at the Messe München, where it will feature a huge exhibition with over 500 exhibitors from around the world. The exhibition is spread over three halls; here are some of the companies and product innovations to look out for on the show floor.

Accelerating the future of electro-photonic integration with SmarAct

As the boundaries between electronic and photonic technologies continue to blur, the semiconductor industry faces a growing challenge: how to test and align increasingly complex electro-photonic chip architectures efficiently, precisely, and at scale. At SEMICON Europa 2025, SmarAct will address this challenge head-on with its latest innovation – Fast Scan Align. This is a high-speed and high-precision alignment solution that redefines the limits of testing and packaging for integrated photonics.

Fast Scan Align
Fast Scan Align SmarAct’s high-speed and high-precision alignment solution redefines the limits of testing and packaging for integrated photonics. (Courtesy: SmarAct)

In the emerging era of heterogeneous integration, electronic and photonic components must be aligned and interconnected with sub-micrometre accuracy. Traditional positioning systems often struggle to deliver both speed and precision, especially when dealing with the delicate coupling between optical and electrical domains. SmarAct’s Fast Scan Align solution bridges this gap by combining modular motion platforms, real-time feedback control, and advanced metrology into one integrated system.

At its core, Fast Scan Align leverages SmarAct’s electromagnetic and piezo-driven positioning stages, which are capable of nanometre-resolution motion in multiple degrees of freedom. Fast Scan Align’s modular architecture allows users to configure systems tailored to their application – from wafer-level testing to fibre-to-chip alignment with active optical coupling. Integrated sensors and intelligent algorithms enable scanning and alignment routines that drastically reduce setup time while improving repeatability and process stability.

Fast Scan Align’s compact modules also allow various measurement techniques to be integrated in new ways – a capability that has become decisive as complex electro-photonic chips reach ever higher levels of integration.

Beyond wafer-level testing and packaging, positioning wafers with extreme precision is more crucial than ever for the highly integrated chips of the future. SmarAct’s PICOSCALE interferometer addresses this challenge by delivering picometre-level displacement measurements directly at the point of interest.

When combined with SmarAct’s precision wafer stages, the PICOSCALE interferometer ensures highly accurate motion tracking and closed-loop control during dynamic alignment processes. This synergy between motion and metrology gives users unprecedented insight into the mechanical and optical behaviour of their devices – which is a critical advantage for high-yield testing of photonic and optoelectronic wafers.

Visitors to SEMICON Europa will also experience how all of SmarAct’s products – from motion and metrology components to modular systems and up to turn-key solutions – integrate seamlessly, offering intuitive operation, full automation capability, and compatibility with laboratory and production environments alike.

For more information visit SmarAct at booth B1.860 or explore more of SmarAct’s solutions in the semiconductor and photonics industry.

Optimized pressure monitoring: Efficient workflows with Thyracont’s VD800 digital compact vacuum meters

Thyracont Vacuum Instruments will be showcasing its precision vacuum metrology systems in exhibition hall C1. Made in Germany, the company’s broad portfolio combines diverse measurement technologies – including piezo, Pirani, capacitive, cold cathode, and hot cathode – to deliver reliable results across a pressure range from 2000 to 3 × 10⁻¹¹ mbar.

VD800 series
VD800 Thyracont’s new series combines high accuracy with a highly intuitive user interface, defining the next generation of compact vacuum meters. (Courtesy: Thyracont)

Front-and-centre at SEMICON Europa will be Thyracont’s new series of VD800 compact vacuum meters. These instruments provide precise, on-site pressure monitoring in industrial and research environments. Featuring a direct pressure display and real-time pressure graphs, the VD800 series is ideal for service and maintenance tasks, laboratory applications, and test setups.

The VD800 series combines high accuracy with a highly intuitive user interface. This delivers real-time measurement values; pressure diagrams; and minimum and maximum pressure – all at a glance. The VD800’s 4+1 membrane keypad ensures quick access to all functions. USB-C and optional Bluetooth LE connectivity deliver seamless data readout and export. The VD800’s large internal data logger can store over 10 million measured values with their RTC data, with each measurement series saved as a separate file.

Data sampling rates can be set from 20 ms to 60 s to achieve dynamic pressure tracking or long-term measurements. Leak rates can be measured directly by monitoring the rise in pressure in the vacuum system. Intelligent energy management gives the meters extended battery life and longer operation times. Battery charging is done conveniently via USB-C.

The vacuum meters are available in several different sensor configurations, making them adaptable to a wide range of different uses. Model VD810 integrates a piezo ceramic sensor for making gas-type-independent measurements for rough vacuum applications. This sensor is insensitive to contamination, making it suitable for rough industrial environments. The VD810 measures absolute pressure from 2000 to 1 mbar and relative pressure from −1060 to +1200 mbar.

Model VD850 integrates a piezo/Pirani combination sensor, which delivers high resolution and accuracy in the rough and fine vacuum ranges. Optimized temperature compensation ensures stable measurements in the absolute pressure range from 1200 to 5 × 10⁻⁵ mbar and in the relative pressure range from −1060 to +340 mbar.

The model VD800 is a standalone meter designed for use with Thyracont’s USB-C vacuum transducers, which are available in two models. The VSRUSB USB-C transducer is a piezo/Pirani combination sensor that measures absolute pressure in the 2000 to 5 × 10⁻⁵ mbar range. The other is the VSCUSB USB-C transducer, which measures absolute pressures from 2000 down to 1 mbar and has a relative pressure range from −1060 to +1200 mbar. A USB-C cable connects the transducer to the VD800 for quick and easy data retrieval. The USB-C transducers are ideal for hard-to-reach areas of vacuum systems. The transducers can be activated while a process is running, enabling continuous monitoring and improved service diagnostics.

With its blend of precision, flexibility, and ease of use, the Thyracont VD800 series defines the next generation of compact vacuum meters. The devices’ intuitive interface, extensive data capabilities, and modern connectivity make them an indispensable tool for laboratories, service engineers, and industrial operators alike.

To experience the future of vacuum metrology in Munich, visit Thyracont at SEMICON Europa hall C1, booth 752. There you will discover how the VD800 series can optimize your pressure monitoring workflows.

The post SEMICON Europa 2025 presents cutting-edge technology for semiconductor R&D and production appeared first on Physics World.

Intrigued by quantum? Explore the 2025 Physics World Quantum Briefing 2.0

3 November 2025 at 15:51

To coincide with a week of quantum-related activities organized by the Institute of Physics (IOP) in the UK, Physics World has just published a free-to-read digital magazine to bring you up to date about all the latest developments in the quantum world.

The 62-page Physics World Quantum Briefing 2.0 celebrates the International Year of Quantum Science and Technology (IYQ) and also looks ahead to a quantum-enhanced future.

Marking 100 years since the advent of quantum mechanics, IYQ aims to raise awareness of the impact of quantum physics and its myriad future applications, with a global diary of quantum-themed public talks, scientific conferences, industry events and more.

The 2025 Physics World Quantum Briefing 2.0, which follows on from the first edition published in May, contains yet more quantum topics for you to explore and is once again divided into “history”, “mystery” and “industry”.

You can find out more about the contributions of Indian physicist Satyendra Nath Bose to quantum science; explore weird phenomena such as causal order and quantum superposition; and discover the latest applications of quantum computing.

A century after quantum mechanics was first formulated, many physicists are still undecided on some of the most basic foundational questions. There’s no agreement on which interpretation of quantum mechanics holds strong; whether the wavefunction is merely a mathematical tool or a true representation of reality; or what impact an observer has on a quantum state.

Some of the biggest unanswered questions in physics – such as finding the quantum/classical boundary or reconciling gravity and quantum mechanics – lie at the heart of these conundrums. So as we look to the future of quantum – from its fundamentals to its technological applications – let us hope that some answers to these puzzles will become apparent as we crack the quantum code to our universe.

This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post Intrigued by quantum? Explore the 2025 Physics World Quantum Briefing 2.0 appeared first on Physics World.

Quantum computing: hype or hope?

3 November 2025 at 15:00

Unless you’ve been living under a stone, you can’t have failed to notice that 2025 marks the first 100 years of quantum mechanics. A massive milestone, to say the least, about which much has been written in Physics World and elsewhere in what is the International Year of Quantum Science and Technology (IYQ). However, I’d like to focus on a specific piece of quantum technology, namely quantum computing.

I keep hearing about quantum computers, so people must be using them to do cool things, and surely they will soon be as commonplace as classical computers. But as a physicist-turned-engineer working in the aerospace sector, I struggle to get a clear picture of where things are really at. If I ask friends and colleagues when they expect to see quantum computers routinely used in everyday life, I get answers ranging from “in the next two years” to “maybe in my lifetime” or even “never”.

Before we go any further, it’s worth reminding ourselves that quantum computing relies on several key quantum properties, starting with superposition, which gives rise to the quantum bit, or qubit. The basic building block of a quantum computer, a qubit exists as a combination of the 0 and 1 states at the same time and is represented by a probabilistic wavefunction. Classical computers, in contrast, use binary digital bits that are either 0 or 1.

Also vital for quantum computers is the notion of entanglement, which is when two or more qubits are co-ordinated, allowing them to share their quantum information. In a highly correlated system, a quantum computer can explore many paths simultaneously. This “massive scale” parallel processing is how quantum computers may solve certain problems exponentially faster than classical ones.

The other key phenomenon for quantum computers is quantum interference. The wave-like nature of qubits means that when different probability amplitudes are in phase, they combine constructively to increase the likelihood of the right solution. Conversely, destructive interference occurs when amplitudes are out of phase, making it less likely to get the wrong answer.

Quantum interference is important in quantum computing because it allows quantum algorithms to amplify the probability of correct answers and suppress incorrect ones, making calculations much faster. Along with superposition and entanglement, it means that quantum computers could process and store vast numbers of probabilities at once, outstripping even the best classical supercomputers.
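These ideas can be made concrete in a few lines of NumPy – a minimal sketch using the standard single-qubit formalism, not any particular vendor’s toolkit. A qubit is a two-component complex vector, and the Hadamard gate H both creates a superposition and, applied a second time, makes the two paths interfere:

import numpy as np
ket0 = np.array([1.0, 0.0], dtype=complex)  # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
superposed = H @ ket0
print(np.abs(superposed) ** 2)  # [0.5 0.5]: equal chance of measuring 0 or 1
# A second Hadamard makes the two computational paths interfere: the amplitudes
# for |1> cancel (destructive), while those for |0> add up (constructive)
back_again = H @ superposed
print(np.abs(back_again) ** 2)  # [1. 0.]: the qubit is certainly back in |0>

Quantum algorithms choreograph this same effect across many entangled qubits, arranging for wrong answers to cancel and right answers to reinforce.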

Towards real devices

To me, it all sounds exciting, but what have quantum computers ever done for us so far? It’s clear that quantum computers are not ready to be deployed in the real world. Significant technological challenges need to be overcome before they become fully realisable. In any case, no-one is expecting quantum computers to displace classical computers “like for like”: they’ll both be used for different things.

Yet it seems that the very essence of quantum computing is also its Achilles heel. Superposition, entanglement and interference – the quantum properties that will make it so powerful – are also incredibly difficult to create and maintain. Qubits are also extremely sensitive to their surroundings. They easily lose their quantum state due to interactions with the environment, whether via stray particles, electromagnetic fields, or thermal fluctuations. Known as decoherence, it makes quantum computers prone to error.

That’s why quantum computers need specialized – and often cryogenically controlled – environments to maintain the quantum states necessary for accurate computation. Building a quantum system with lots of interconnected qubits is therefore a major, expensive engineering challenge, with complex hardware and extreme operating conditions. Developing “fault-tolerant” quantum hardware and robust error-correction techniques will be essential if we want reliable quantum computation.

As for the development of software and algorithms for quantum systems, there’s a long way to go, with a lack of mature tools and frameworks. Quantum algorithms require fundamentally different programming paradigms to those used for classical computers. Put simply, that’s why building reliable, real-world deployable quantum computers remains a grand challenge.

What does the future hold?

Despite the huge amount of work that still lies in store, quantum computers have already demonstrated some amazing potential. The US firm D-Wave, for example, claimed earlier this year to have carried out simulations of quantum magnetic phase transitions that wouldn’t be possible with the most powerful classical devices. If true, this was the first time a quantum computer had achieved “quantum advantage” for a practical physics problem (whether the problem was worth solving is another question).

There is also a lot of research and development going on around the world into solving the qubit stability problem. At some stage, there will likely be a breakthrough design for robust and reliable quantum computer architecture. There is probably a lot of technical advancement happening right now behind closed doors.

The first real-world applications of quantum computers will be akin to the giant classical supercomputers of the past. If you were around in the 1980s, you’ll remember Cray supercomputers: huge, inaccessible beasts owned by large corporations, government agencies and academic institutions to perform vast numbers of calculations (provided you had the money).

And, if I believe what I read, quantum computers will not replace classical computers, at least not initially, but work alongside them, as each has its own relative strengths. Quantum computers will be suited for specific and highly demanding computational tasks, such as drug discovery, materials science, financial modelling, complex optimization problems and increasingly large artificial intelligence and machine-learning models.

These are all things beyond the limits of classical computer resource. Classical computers will remain relevant for everyday tasks like web browsing, word processing and managing databases, and they will be essential for handling the data preparation, visualization and error correction required by quantum systems.

And there is one final point to mention, which is cyber security. Quantum computing poses a major threat to existing encryption methods, with the potential to undermine widely used public-key cryptography. There are concerns that hackers are already storing stolen data in anticipation of future quantum decryption.

Having looked into the topic, I can now see why the timeline for quantum computing is so fuzzy and why I got so many different answers when I asked people when the technology would be mainstream. Quite simply, I still can’t predict how or when the tech stack will pan out. But as IYQ draws to a close, the future for quantum computers is bright.

The post Quantum computing: hype or hope? appeared first on Physics World.

Quantum computing on the verge: correcting errors, developing algorithms and building up the user base

31 October 2025 at 15:20

When it comes to building a fully functional “fault-tolerant” quantum computer, companies and government labs all over the world are rushing to be the first over the finish line. But a truly useful universal quantum computer capable of running complex algorithms would have to entangle millions of coherent qubits, which are extremely fragile. Because of environmental factors such as temperature, interference from other electronic systems in hardware, and even errors in measurement, today’s devices would fail under an avalanche of errors long before reaching that point.

So the problem of error correction is a key issue for the future of the market. It arises because errors in qubits can’t be corrected simply by keeping multiple copies, as they are in classical computers: quantum rules forbid the copying of a qubit’s state, which is in any case unknown while it remains entangled with other qubits. To run quantum circuits with millions of gates, we therefore need new tricks to enable quantum error correction (QEC).
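The impossibility of copying is the textbook “no-cloning theorem”, and the standard argument – sketched here as background, not something spelled out in this article – needs only the linearity of quantum mechanics. Suppose some unitary U could clone an arbitrary qubit state, U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle. Acting on a superposition, linearity gives

U\big(\alpha|0\rangle + \beta|1\rangle\big)|0\rangle = \alpha|0\rangle|0\rangle + \beta|1\rangle|1\rangle,

whereas a true clone would be

\big(\alpha|0\rangle + \beta|1\rangle\big)\big(\alpha|0\rangle + \beta|1\rangle\big) = \alpha^2|0\rangle|0\rangle + \alpha\beta\big(|0\rangle|1\rangle + |1\rangle|0\rangle\big) + \beta^2|1\rangle|1\rangle.

The two expressions disagree whenever both \alpha and \beta are non-zero, so no such U exists – which is why QEC must spread information across many qubits rather than duplicate it.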

Protected states

The general principle of QEC is to spread the information over many qubits so that an error in any one of them doesn’t matter too much. “The essential idea of quantum error correction is that if we want to protect a quantum system from damage then we should encode it in a very highly entangled state,” says John Preskill, director of the Institute for Quantum Information and Matter at the California Institute of Technology in Pasadena.

There is no unique way of achieving that spreading, however. Different error-correcting codes can depend on the connectivity between qubits – whether, say, they are coupled only to their nearest neighbours or to all the others in the device – which tends to be determined by the physical platform being used. However error correction is done, it must be done fast. “The mechanisms for error correction need to be running at a speed that is commensurate with that of the gate operations,” says Michael Cuthbert, founding director of the UK’s National Quantum Computing Centre (NQCC). “There’s no point in doing a gate operation in a nanosecond if it then takes 100 microseconds to do the error correction for the next gate operation.”
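A back-of-envelope calculation shows why speed matters. The Python sketch below uses assumed, illustrative numbers: a distance-25 surface code producing roughly d² − 1 stabilizer (syndrome) bits per round, and a one-microsecond error-correction round, the cadence cited earlier in this feed for Google’s experiments:

d = 25                    # assumed surface-code distance (illustrative)
syndrome_bits = d**2 - 1  # roughly one bit per stabilizer measurement per round
round_time = 1e-6         # seconds per error-correction round
rate = syndrome_bits / round_time
print(f"{rate / 1e6:.0f} Mbit/s of syndrome data per logical qubit")  # ~624 Mbit/s

Scale that up to 100 logical qubits and the decoder must digest tens of gigabits of syndrome data every second, in real time – which is why decoding speed, and not just qubit quality, is a central engineering problem.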

At the moment, dealing with errors is largely about compensation rather than correction: patching up problems in retrospect, for example by using algorithms that can throw out results that are likely to be unreliable (an approach called “post-selection”). It’s also a matter of making better qubits that are less error-prone in the first place.

1 From many to few

Turning unreliable physical qubits into a logical qubit
(Courtesy: Riverlane via www.riverlane.com)

Qubits are so fragile that their quantum state is very susceptible to the local environment, and can easily be lost through the process of decoherence. Current quantum computers therefore have very high error rates – roughly one error in every few hundred operations. For quantum computers to be truly useful, this error rate will have to be reduced to the scale of one in a million – and larger, more complex algorithms would require error rates of one in a billion or even one in a trillion. This requires real-time quantum error correction (QEC).

To protect the information stored in qubits, a multitude of unreliable physical qubits have to be combined in such a way that if one qubit fails and causes an error, the others can help protect the system. Essentially, by combining many physical qubits (shown above on the left), one can build a few “logical” qubits that are strongly resistant to noise.

According to Maria Maragkou, commercial vice-president of quantum error-correction company Riverlane, the goal of full QEC has ramifications for the design of the machines all the way from hardware to workflow planning. “The shift to support error correction has a profound effect on the way quantum processors themselves are built, the way we control and operate them, through a robust software stack on top of which the applications can be run,” she explains. The “stack” includes everything from programming languages to user interfaces and servers.

With genuinely fault-tolerant qubits, errors can be kept under control and prevented from proliferating during a computation. Such qubits might be made in principle by combining many physical qubits into a single “logical qubit” in which errors can be corrected (see figure 1). In practice, though, this creates a large overhead: huge numbers of physical qubits might be needed to make just a few fault-tolerant logical qubits. The question is then whether errors in all those physical qubits can be checked faster than they accumulate (see figure 2).
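The simplest toy version of this idea is a three-qubit repetition code, which outvotes individual bit-flips. Real codes such as the surface code are far subtler – they must also catch phase errors, and can never read the data qubits directly – but a minimal Monte Carlo sketch in Python (illustrative only) already shows why redundancy pays:

import random
def logical_error_rate(p, trials=200_000):
    # Each of three copies flips independently with probability p; the encoded
    # bit is lost only if a majority (two or three) of the copies flip
    failures = sum(
        sum(random.random() < p for _ in range(3)) >= 2
        for _ in range(trials)
    )
    return failures / trials
for p in (0.1, 0.01, 0.001):
    print(p, logical_error_rate(p))  # roughly 3*p**2, well below p for small p

For p = 0.01 the encoded error rate is about 0.0003 – some thirty times better than a bare qubit – and the advantage grows as p falls, which is the basic trade behind building logical qubits out of many physical ones.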

That overhead has been steadily reduced over the past several years, and at the end of last year researchers at Google announced that their 105-qubit Willow quantum chip passed the break-even threshold at which the error rate gets smaller, rather than larger, as more physical qubits are used to make a logical qubit. This means that in principle such arrays could be scaled up without errors accumulating.
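A common rule of thumb – a textbook heuristic, not a figure taken from Google’s results – expresses what crossing break-even means. For a code of distance d (roughly, the number of physical qubits along one side of the array), physical error rate p and threshold p_th, the logical error rate scales as

p_{\mathrm{L}} \approx A\,\left(\frac{p}{p_{\mathrm{th}}}\right)^{(d+1)/2},

where A is a constant of order one. When p < p_th the bracketed factor is less than one, so every increase in code distance suppresses p_L multiplicatively; when p > p_th the same expression grows, and adding qubits only makes things worse.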

2 Error correction in action

Illustration of the error correction cycle
(Courtesy: Riverlane via www.riverlane.com)

The illustration gives an overview of quantum error correction (QEC) in action within a quantum processing unit. UK-based company Riverlane is building its Deltaflow QEC stack that will correct millions of data errors in real time, allowing a quantum computer to go beyond the reach of any classical supercomputer.

Fault-tolerant quantum computing is the ultimate goal, says Jay Gambetta, director of IBM research at the company’s centre in Yorktown Heights, New York. He believes that to perform truly transformative quantum calculations, the system must go beyond demonstrating a few logical qubits – instead, you need arrays of at least 100 of them, that can perform more than 100 million quantum operations (10⁸ QuOps). “The number of operations is the most important thing,” he says.

It sounds like a tall order, but Gambetta is confident that IBM will achieve these figures by 2029. By building on what has been achieved so far with error correction and mitigation, he feels “more confident than I ever did before that we can achieve a fault-tolerant computer.” Jerry Chow, a former manager of the Experimental Quantum Computing group at IBM, shares that optimism. “We have a real blueprint for how we can build [such a machine] by 2029,” he says (see IBM’s roadmap figure below).

Others suspect the breakthrough threshold may be a little lower: Steve Brierley, chief executive of Riverlane, believes that the first error-corrected quantum computer, with around 10 000 physical qubits supporting 100 logical qubits and capable of a million QuOps (a megaQuOp), could come as soon as 2027. Following on, gigaQuOp machines (10⁹ QuOps) should be available by 2030–32, and teraQuOp machines (10¹² QuOps) by 2035–37.

Platform independent

Error mitigation and error correction are just two of the challenges for developers of quantum software. Fundamentally, to develop a truly quantum algorithm involves taking full advantage of the key quantum-mechanical properties such as superposition and entanglement. Often, the best way to do that depends on the hardware used to run the algorithm. But ultimately the goal will be to make software that is not platform-dependent and so doesn’t require the user to think about the physics involved.

“At the moment, a lot of the platforms require you to come right down into the quantum physics, which is a necessity to maximize performance,” says Richard Murray of photonic quantum-computing company Orca. Try to generalize an algorithm by abstracting away from the physics and you’ll usually lower the efficiency with which it runs. “But no user wants to talk about quantum physics when they’re trying to do machine learning or something,” Murray adds. He believes that ultimately it will be possible for quantum software developers to hide those details from users – but Brierley thinks this will require fault-tolerant machines.

“In due time everything below the logical circuit will be a black box to the app developers”, adds Maragkou over at Riverlane. “They will not need to know what kind of error correction is used, what type of qubits are used, and so on.” She stresses that creating truly efficient and useful machines depends on developing the requisite skills. “We need to scale up the workforce to develop better qubits, better error-correction codes and decoders, write the software that can elevate those machines and solve meaningful problems in a way that they can be adopted.” Such skills won’t come only from quantum physicists, she adds: “I would dare say it’s mostly not!”

Yet even now, working on quantum software doesn’t demand a deep expertise in quantum theory. “You can be someone working in quantum computing and solving problems without having a traditional physics training and knowing about the energy levels of the hydrogen atom and so on,” says Ashley Montanaro, who co-founded the quantum software company Phasecraft.

On the other hand, insights can flow in the other direction too: working on quantum algorithms can lead to new physics. “Quantum computing and quantum information are really pushing the boundaries of what we think of as quantum mechanics today,” says Montanaro, adding that QEC “has produced amazing physics breakthroughs.”

Early adopters?

Once we have true error correction, Cuthbert at the UK’s NQCC expects to see “a flow of high-value commercial uses” for quantum computers. What might those be?

In the arena of quantum chemistry and materials science, genuine quantum advantage – calculating something that is impossible using classical methods alone – is more or less here already, says Chow. Crucially, however, quantum methods needn’t be used for the entire simulation but can be added to classical ones to give them a boost for particular parts of the problem.

IBM and RIKEN quantum systems
Joint effort In June 2025, IBM in the US and Japan’s national research laboratory RIKEN, unveiled the IBM Quantum System Two, the first to be used outside the US. It involved IBM’s 156-qubit IBM Heron quantum computing system (left) being paired with RIKEN’s supercomputer Fugaku (right) — one of the most powerful classical systems on Earth. The computers are linked through a high-speed network at the fundamental instruction level to form a proving ground for quantum-centric supercomputing. (Courtesy: IBM and RIKEN)

For example, last year researchers at IBM teamed up with scientists at several RIKEN institutes in Japan to calculate the minimum energy state for the iron sulphide cluster (4Fe-4S) at the heart of the bacterial nitrogenase enzyme that fixes nitrogen. This cluster is too big and complex to be accurately simulated using the classical approximations of quantum chemistry. The researchers used a combination of quantum computing (with IBM’s 72-qubit Heron chip) and RIKEN’s Fugaku high-performance computer (HPC). This idea of “improving classical methods by injecting quantum as a subroutine” is likely to be a more general strategy, says Gambetta. “The future of computing is going to be heterogeneous accelerators [of discovery] that include quantum.”

Likewise, Montanaro says that Phasecraft is developing “quantum-enhanced algorithms”, where a quantum computer is used, not to solve the whole problem, but just to help a classical computer in some way. “There are only certain problems where we know quantum computing is going to be useful,” he says. “I think we are going to see quantum computers working in tandem with classical computers in a hybrid approach. I don’t think we’ll ever see workloads that are entirely run using a quantum computer.” Among the first important problems that quantum machines will solve, according to Montanaro, is the simulation of new materials – to develop, for example, clean-energy technologies (see figure 3).
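One way to picture this hybrid division of labour is a variational loop, sketched below in Python. Everything here is simulated classically with NumPy, and the two-level Hamiltonian is invented for illustration; on real hardware, the energy() call is the piece a quantum processor would take over:

import numpy as np
H = np.array([[1.0, 0.5], [0.5, -1.0]])  # toy two-level Hamiltonian (illustrative)
def ansatz(theta):
    # Trial state |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])
def energy(theta):
    # The "quantum" subroutine: estimate <psi|H|psi> for the given parameter
    psi = ansatz(theta)
    return psi @ H @ psi
# The classical computer's job: search the parameter for the lowest energy
thetas = np.linspace(0, 2 * np.pi, 1000)
best = min(thetas, key=energy)
print(energy(best))  # approaches the true ground-state energy, -sqrt(1.25) = -1.118...

Swap the NumPy expectation value for a parameterized circuit on quantum hardware and you have the pattern described above: the classical machine optimizes, while the quantum machine evaluates the one piece it is uniquely suited to.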

“For a physicist like me,” says Preskill, “what is really exciting about quantum computing is that we have good reason to believe that a quantum computer would be able to efficiently simulate any process that occurs in nature.”

3 Structural insights

Modelling materials using quantum computing
(Courtesy: Phasecraft)

A promising application of quantum computers is simulating novel materials. Researchers from the quantum algorithms firm Phasecraft, for example, have already shown how a quantum computer could help simulate complex materials such as the polycrystalline compound LK-99, which was purported by some researchers in 2023 to be a room-temperature superconductor.

Using a classical/quantum hybrid workflow, together with the firm’s proprietary material simulation approach to encode and compile materials on quantum hardware, Phasecraft researchers were able to establish a classical model of the LK-99 structure that allowed them to extract an approximate representation of the electrons within the material. The illustration above shows the green and blue electronic structure around red and grey atoms in LK-99.

Montanaro believes another likely near-term goal for useful quantum computing is solving optimization problems – both here and in quantum simulation, “we think genuine value can be delivered already in this NISQ era with hundreds of qubits.” (NISQ, a term coined by Preskill, refers to noisy intermediate-scale quantum computing, with relatively small numbers of rather noisy, error-prone qubits.)

One further potential benefit of quantum computing is that it tends to require less energy than classical high-performance computing, whose energy consumption is notoriously high. If the energy cost could be cut by even a few percent, it would be worth using quantum resources for that reason alone. “Quantum has real potential for an energy advantage,” says Chow. One study in 2020 showed that a particular quantum-mechanical calculation carried out on an HPC used many orders of magnitude more energy than when it was simulated on a quantum circuit. Such comparisons are not easy, however, in the absence of an agreed and well-defined metric for energy consumption.

Building the market

Right now, the quantum computing market is in a curious superposition of states itself – it has ample proof of principle, but today’s devices are still some way from being able to perform a computation relevant to a practical problem that could not be done with classical computers. Yet to get to that point, the field needs plenty of investment.

The fact that quantum computers, especially if used with HPC, are already unique scientific tools should establish their value in the immediate term, says Gambetta. “I think this is going to accelerate, and will keep the funding going.” It is why IBM is focusing on utility-scale systems of around 100 qubits or so and more than a thousand gate operations, he says, rather than simply trying to build ever bigger devices.

Montanaro sees a role for governments to boost the growth of the industry “where it’s not the right fit for the private sector”. One role of government is simply as a customer. For example, Phasecraft is working with the UK national grid to develop a quantum algorithm for optimizing the energy network. “Longer-term support for academic research is absolutely critical,” Montanaro adds. “It would be a mistake to think that everything is done in terms of the underpinning science, and governments should continue to support blue-skies research.”

IBM roadmap of quantum development
The road ahead IBM’s current roadmap charts how the company plans on scaling up its devices to achieve a fault-tolerant device by 2029. Alongside hardware development, the firm will also focus on developing new algorithms and software for these devices. (Courtesy: IBM)

It’s not clear, though, whether there will be a big demand for quantum machines that every user will own and run. Before 2010, “there was an expectation that banks and government departments would all want their own machine – the market would look a bit like HPC,” Cuthbert says. But that demand depends in part on what commercial machines end up being like. “If it’s going to need a premises the size of a football field, with a power station next to it, that becomes the kind of infrastructure that you only want to build nationally.” Even for smaller machines, users are likely to try them first on the cloud before committing to installing one in-house.

According to Cuthbert, the real challenge in supply-chain development is that many of today’s technologies were developed for the science community – where, say, achieving millikelvin cooling or using high-power lasers is routine. “How do you go from a specialist scientific clientele to something that starts to look like a washing machine factory, where you can make them to a certain level of performance,” while also being much cheaper and easier to use?

But Cuthbert is optimistic about bridging this gap to get to commercially useful machines, encouraged in part by looking back at the classical computing industry of the 1970s. “The architects of those systems could not imagine what we would use our computation resources for today. So I don’t think we should be too discouraged that you can grow an industry when we don’t know what it’ll do in five years’ time.”

Montanaro too sees analogies with those early days of classical computing. “If you think what the computer industry looked like in the 1940s, it’s very different from even 20 years later. But there are some parallels. There are companies that are filling each of the different niches we saw previously, there are some that are specializing in quantum hardware development, there are some that are just doing software.” Cuthbert thinks that the quantum industry is likely to follow a similar pathway, “but more quickly and leading to greater market consolidation more rapidly.”

However, while the classical computing industry was revolutionized by the advent of personal computing in the 1970s and 80s, it seems very unlikely that we will have any need for quantum laptops. Rather, we might increasingly see apps and services appear that use cloud-based quantum resources for particular operations, merging so seamlessly with classical computing that we don’t even notice.

That, perhaps, would be the ultimate sign of success: that quantum computing becomes invisible, no big deal but just a part of how our answers are delivered.

  • In the first instalment of this two-part article, Philip Ball explores the latest developments in the quantum-computing industry

This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post Quantum computing on the verge: correcting errors, developing algorithms and building up the user base appeared first on Physics World.
