I oversee a lab where engineers try to destroy my life’s work. It’s the only way to prepare for quantum threats

The first time I handed over my credit card to a security lab, it came back to me broken. Not physically damaged, but compromised. In less than 10 minutes, the engineers had discovered my PIN.

This happened in the early 1990s, when I was a young engineer starting an internship at one of the companies that helped create the smart card industry. I believed my card was secure. I believed the system worked. But watching strangers casually extract something that was supposed to be secret and protected was a shock. It was also the moment I realized how insecure security actually is, and the devastating impact security breaches could have on individuals, global enterprises, and governments.

Most people assume security is about building something that’s unbreakable. In reality, security is about understanding exactly how something breaks, under what conditions, and how quickly. That is why, today, I run labs where engineers are paid to attack the very chips my company designs. They measure power fluctuations, inject electromagnetic signals, fire lasers, and strip away layers of silicon. Their job is to behave like criminals and hostile nation-states on purpose, because the only honest way to build trust is to try to destroy it first.

To someone outside the security world, this approach sounds counterintuitive. Why spend years designing secure hardware, only to invite people to tear it apart? The answer is straightforward: Trust that has never been tested is not trust. It is assumption. Assumptions fail quietly at first, and they fail at the worst possible moment.

Over the past three decades, I have watched secure chips move from a specialized technology into invisible infrastructure. Early in my career, much of my work focused on payment cards. Convincing banks and payment networks that a chip was safer than a magnetic stripe was not easy. At the time, there were fears about surveillance and tracking. What few people recognized was that these chips were becoming digital passports. They proved identity, authenticated devices, and determined what could and could not be trusted on a network.

Today, secure chips sit quietly inside credit cards, smartphones, cars, medical devices, home routers, industrial systems, and national infrastructure. Most people never notice them, which is often taken as a sign of success. In reality, that invisibility also creates risk. When security disappears from view, it is easy to forget that it must still evolve.

At a basic level, a secure chip does one essential thing. It protects a secret – a cryptographic identity that proves a device is genuine. All other security measures build upon that foundation. When a phone unlocks, when a car communicates with a charging station, when a medical sensor sends data to a hospital, or when a software update is delivered to a device in the field, all of those actions depend on that secret remaining secret.
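To make that dependence concrete, here is a minimal sketch of the kind of challenge-response exchange such a secret enables. It is illustrative only: the function names are invented, the key handling is deliberately simplified, and real secure chips implement this in hardened hardware, often with asymmetric keys and attestation rather than a single shared secret.

```python
# Illustrative sketch only: a toy challenge-response exchange, assuming the
# device holds a symmetric key and the verifier shares it. Real secure chips
# do this in hardened silicon; every name here is hypothetical.
import hashlib
import hmac
import os

DEVICE_SECRET = os.urandom(32)   # the secret that must never leave the chip

def device_respond(challenge: bytes) -> bytes:
    """Inside the chip: prove knowledge of the secret without revealing it."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes, shared_key: bytes) -> bool:
    """On the network side: recompute the expected response and compare."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)            # fresh, unpredictable nonce
response = device_respond(challenge)  # computed on the chip
assert verifier_check(challenge, response, DEVICE_SECRET)
```

Everything that follows in a phone, a car, or a payment terminal rests on exchanges like this one working, and on the key never being exposed.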

The challenge is that chips do not simply store secrets. They use them. They calculate, communicate, and respond. The moment a chip does that, it begins to leak information. Not because it is poorly designed, but because physics cannot be negotiated. Power consumption shifts. Electromagnetic emissions change. Timing varies. With the right equipment and enough expertise, those signals can be measured and interpreted.
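One of those leaks, timing, is easy to show in software. The sketch below is deliberately oversimplified and is not how any real chip checks a PIN; it only illustrates the principle that a naive check which stops at the first wrong digit takes measurably longer when more of the guess is correct, which is exactly the kind of signal a side-channel lab measures through power and electromagnetic traces.

```python
# Illustrative sketch only: a naive PIN check whose running time leaks how
# many leading digits are correct, next to a constant-time variant that
# removes that particular leak. Not representative of real chip firmware.
import hmac

def naive_check(guess: str, secret: str) -> bool:
    # Returns as soon as a digit mismatches, so correct prefixes take longer.
    for g, s in zip(guess, secret):
        if g != s:
            return False
    return len(guess) == len(secret)

def constant_time_check(guess: str, secret: str) -> bool:
    # Compares every byte regardless of mismatches, evening out the timing.
    return hmac.compare_digest(guess.encode(), secret.encode())
```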

This is what happens inside our attack labs every day. Engineers listen to chips in much the same way an electricity provider can infer your daily routine from your power usage. They stress-test devices until they behave differently than intended. They introduce faults and observe how the chip responds. From those observations, they learn how an attacker would think, where information escapes, and how defenses must be redesigned.

Quantum computing enters this picture without drama or science fiction. Quantum does not change what attackers are after – they still want the secret. What quantum changes is the speed at which they can get it. Problems that would take classical computers thousands of years can collapse to minutes or seconds once sufficient quantum capability exists. The target remains the same. The timeline disappears.

This is why static security fails. Any system designed to be secure once and then left untouched is already aging toward obsolescence. Even if a system is never attacked, it will eventually fail, because the world around it does not stand still. Attack techniques evolve and improve. Tools become cheaper, more powerful, and more accessible – especially in the age of artificial intelligence. Knowledge about successful attacks spreads globally, emboldening others to seek similar successes.

Many organizations make the same mistake. They assume they will see the threat coming, and they wait for visible breaches or public incidents before acting. With quantum, that logic breaks down. The first actors with meaningful quantum capability will not announce it. They will use it quietly. In fact, this is already happening with Harvest Now, Decrypt Later (HNDL) attacks, in which large amounts of encrypted data are collected and stored today for future quantum decryption. By the time attacks become obvious, the damage will already be done.

That reality is why governments and regulators are moving now. Across industries, requirements are emerging that systems must become quantum resilient within defined timelines. This is not driven by theory or hype. It is driven by the simple fact that updating cryptography, hardware, and infrastructure takes years, while exploiting weaknesses can take moments.

When I walk through our labs today, what strikes me most is not the sophistication of the tools, but the discipline of the process. Access is tightly controlled. Engineers are vetted and audited. Every experiment is documented. This is not curiosity-driven hacking. It is structured, repeatable testing designed to surface weaknesses early, while there is still time to fix them. Every successful attack becomes an input for a stronger design.

This is what leaders, system owners, and policymakers need to understand. Security does not fail suddenly. It fails quietly, long before anyone notices. Preparing for quantum threats is not about predicting the exact moment a breakthrough occurs. It is about accepting that once it does, there will be no grace period. The only responsible approach is to assume your systems will be attacked and to make sure that happens under controlled conditions, before someone else decides the timing for you.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
