The Foundation Beneath the Foundation

Sovereign in Name. Dependent by Design.

AI Strategy Sovereignty

Makeljana Shkurti

Business Strategy

Makeljana leads business strategy at VRULL and chairs the RISC-V International AI & ML Market Development Committee. A frequent speaker on technology strategy, she connects the commercial reality of AI silicon with the ecosystem decisions that make it viable.

This piece was inspired by a recent article in The Economist on Nvidia’s ambition to become a “foundational company” for the AI economy. I have been thinking about this for a while. At VRULL, we engage closely with the EuroHPC ecosystem and work daily at the layer where these dependencies stop being theoretical and become someone’s engineering problem. This is that problem, written down.


Nvidia has done something remarkable. In less than a decade, it became the indispensable infrastructure of the artificial intelligence age. Its GPUs power the models that write, reason, diagnose, and decide. Its networking connects the data centres those models live in. Its software frameworks — CUDA above all — became the language in which AI is written. Jensen Huang did not just sell shovels in a gold rush. He built the mine, the railroad, and the assay office.

This is a genuine achievement. The market opportunity Nvidia has unlocked is real, and the technical foundations it has laid are extraordinary. But there is a question that deserves more attention — not about Nvidia, but about what “foundational” actually means for the organisations building on top of it.

What foundational really means

According to a recent piece in The Economist, Jensen Huang has described AI as a “five-layer cake” spanning energy, chips and computing infrastructure, networking, AI models, and applications — and Nvidia, he has made clear, intends to play a role in every single layer of that stack. Not merely to sell chips into it, but to be present across the entire structure simultaneously — the infrastructure on which the AI economy runs.

This is not a boast. It is a strategy, and it is being executed with remarkable discipline. Vertical integration is a rational response to a market where the layers are deeply interdependent and where performance gains come from optimising across all of them simultaneously. Nvidia is right that building AI systems without tight integration introduces friction that costs performance and speed.

But vertical integration has a second-order effect that tends to announce itself slowly, and then all at once. It transfers leverage.

When you adopt a vertically integrated stack, you are not simply purchasing better performance. You are making a series of compounding decisions: which chips to train on, which software frameworks to write in, which networking infrastructure to deploy, which model architectures to optimise for. Each decision is individually reasonable. Collectively, they define the shape of your future optionality — and they transfer leverage to the company whose stack you are running on.

The three risks that don’t appear on the product roadmap

The supply chain is a single point of failure. Nvidia’s chips are manufactured by TSMC in Taiwan and packaged with memory from a handful of suppliers. Advanced memory is already sold out for this year and much of next. Access to the hardware your AI strategy depends on is not guaranteed — it is rationed, priced dynamically, and subject to forces entirely outside your control. Supply chain fragility is not theoretical. The semiconductor industry has experienced acute shortages repeatedly. Each time, the organisations most exposed were those with the fewest alternatives. Concentration in a supply chain does not merely increase cost when something goes wrong. It determines whether you can operate at all.

Export controls are someone else’s foreign policy. Since 2022, Nvidia has been barred from selling its most advanced chips to China — not by choice, but by Washington. That is the nature of strategically important technology: governments regulate it. The rules changed. Nvidia’s customers in the affected geographies had no say. You do not need to be in a sanctioned country for this to matter. You need only to be in a world where the rules can change and your infrastructure cannot.

Lock-in compounds quietly. CUDA is exceptional software. It is also the reason switching away from Nvidia is not a procurement decision — it is an engineering project. Models trained on CUDA, pipelines built in CUDA, optimisations written in CUDA do not move freely to other hardware. With every additional layer of Nvidia integration — chips, networking, models, the frameworks used to build applications — the cost of change compounds. The organisations most deeply embedded in the Nvidia stack are not simply customers. They are tenants. The building is excellent. But they do not own the keys.

Europe is trying. But it is buying the problem, not solving it.

Europe has recognised the strategic stakes. The EuroHPC Joint Undertaking, established in 2018, has built three of the world’s ten most powerful supercomputers — JUPITER in Germany (4th), LUMI in Finland (9th), and Leonardo in Italy (10th). The EU’s AI Factories programme — launched in 2024 and now encompassing 19 sites across 16 member states, alongside 13 AI Factory Antennas in a further 7 countries — represents a serious, well-funded attempt to build European AI capability. The InvestAI initiative aims to mobilise €200 billion for AI investment across Europe, with €20 billion dedicated to a new fund for AI Gigafactories. On 22 January 2026, the European Parliament voted 471 to 68 for a resolution calling on the Commission to reduce the bloc’s dependence on foreign technology across the entire stack — semiconductors, cloud, software, and AI.

The intent is right. The execution has a fundamental flaw.

Partnership consortia across the AI Factories are composed primarily of research institutions rather than commercial actors. The hardware filling those facilities is overwhelmingly non-European. Amazon, Microsoft, and Google alone command around 70% of the European cloud market, with local providers accounting for roughly 15%. And the chips running the AI workloads in Europe’s new sovereign facilities are, in the vast majority of cases, designed in California and manufactured in Taiwan.

This is not a failure unique to any single programme. It is a structural pattern. Europe has consistently chosen to procure AI capability rather than build it — importing the stack rather than developing the engineering base that would make an independent stack possible. The EU Chips Act committed €43 billion to rebuilding European semiconductor capacity, targeting 20% of global chip manufacturing by 2030. The European Court of Auditors has reported that target will almost certainly be missed, forecasting a more likely outcome of around 11.7% — and noting that meeting the original goal would require approximately quadrupling current production capacity. Even the flagship TSMC fab in Dresden — backed by €5 billion in German and EU aid — keeps its core process technology and intellectual property under Taiwanese control.

Meanwhile, the hyperscalers Europe relies on for cloud and AI infrastructure are deepening their own positions. Microsoft, Google, and Amazon are collectively spending hundreds of billions on AI infrastructure — building the capacity that European institutions will then rent access to, on terms set elsewhere. According to the European Parliament’s own resolution, the EU relies on non-EU countries for over 80% of its digital products, services, infrastructure, and intellectual property — a dependency that, as MEP Michał Kobosko put it bluntly, risks turning Europe into a “digital colony.”

The data is onshore. The stack is not.

EuroHPC and the AI Factories are necessary. They are not sufficient. Filling European buildings with non-European hardware and non-European software, then calling it sovereign AI, is not a strategy. It is a procurement exercise. There is no credible path to European AI independence that does not run through a European semiconductor industry — chips designed in Europe, enabled by software built in Europe, maintained by engineers whose work is not subject to the licensing decisions of companies headquartered elsewhere. Without that foundation, every sovereign AI programme is an expensive way of being dependent in a slightly different location.

The software layer is where the real dependency lives

This point tends to get lost in conversations about AI sovereignty and supply chain risk, because it is less visible than hardware and less dramatic than geopolitics. But it is arguably more consequential than either.

The deepest form of lock-in is not the chip. Hardware can be replaced, expensively but eventually. The deepest lock-in is in the software that only runs on that hardware — the optimised kernels, the compiler backends, the framework integrations that exploit hardware-specific capabilities. When that software exists only for one architecture, the architecture becomes permanent.
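To make that concrete: lock-in rarely arrives as one big decision. It accumulates line by line, in application code that quietly assumes one vendor’s runtime. A minimal sketch, using PyTorch purely as the illustrative framework, shows the habit that keeps the door open.

    import torch

    # Hard-coding the vendor runtime looks harmless in a prototype:
    #     x = torch.randn(1024, 1024, device="cuda")
    # but every such line is a small, compounding commitment to one architecture.

    # The device-agnostic habit: resolve the device once, at runtime. The same
    # code runs on a CUDA GPU where one is present and falls back to CPU
    # elsewhere, without touching the rest of the pipeline.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    x = torch.randn(1024, 1024, device=device)
    y = torch.nn.functional.relu(x @ x.T)  # standard framework ops, no hand-written vendor kernels
    print(y.device)

This does not remove the dependency on hand-tuned kernels lower in the stack. It simply keeps application code from hard-wiring that dependency into every file.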

This is the layer where we work. At VRULL, our engineers define ISA extensions for AI acceleration before silicon is committed, write the compiler backends that make those extensions usable, and build the framework integrations — PyTorch, TensorFlow, ONNX Runtime — that close the gap between hardware capability and inference performance. The same team that designs the extension writes the compiler that targets it. From architecture to toolchain to production, every layer informs the others. We have written before about what ecosystem maturity actually requires — this is the work that delivers it.

That work lives upstream — in GCC, LLVM, QEMU, the Linux kernel — not in a proprietary silo. Upstream authority matters here: it means the software is maintained openly, accessible to anyone building on the same foundation, not controlled by a single vendor.

The question worth asking

The practical answer for most organisations is not to avoid any particular vendor. For large-scale model training in particular, the leading hardware remains the best choice for many workloads, and designing around that reality is expensive and often counterproductive. The question is how to avoid a dependency structure that makes any single foreign vendor irreplaceable — in your supply chain, in your software stack, and in the strategic calculus of your organisation’s future.

That means investing, in parallel, in the software layer that keeps your options open. It means ensuring that the AI frameworks your teams build in are not optimised exclusively for one architecture. It means that when new silicon — from a European vendor, from the emerging RISC‑V ecosystem, or from the next wave of open architectures — becomes available, you can actually move to it without rewriting your entire infrastructure from scratch.
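In practice, keeping options open is often mundane. One such habit, sketched below with PyTorch and a toy model (the model and file name are placeholders, not a recommendation), is to keep trained networks in a portable exchange format such as ONNX, so that deployment becomes a question of which runtime and execution provider the target hardware offers rather than which vendor the model was trained on.

    import torch
    import torch.nn as nn

    # A toy model standing in for a network trained on today's hardware.
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
    model.eval()

    # Exporting to ONNX separates the trained weights and graph from any single
    # vendor's runtime; the resulting file can then be served by ONNX Runtime
    # through whichever execution provider the deployment hardware provides.
    example_input = torch.randn(1, 64)
    torch.onnx.export(
        model,
        example_input,
        "model.onnx",
        input_names=["input"],
        output_names=["logits"],
    )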

For European institutions specifically — HPC centres, AI Factories, national AI programmes — it means asking harder questions of their infrastructure investments: What software stack comes with this? Who maintains it? Is it open? Can we port it? Who controls the compiler? These are not exotic questions. They are the questions that determine whether a sovereignty programme is real or theatrical.

Europe has the funding, the infrastructure, and the political will. What it still needs is the commitment to build the semiconductor and software ecosystem that makes that infrastructure genuinely independent, rather than an expensive, well-intentioned replica of the dependency it was designed to escape.

The foundation beneath the foundation — the compiler, the toolchain, the AI framework, the upstream authority in the codebases that matter — is where genuine AI independence is built or forfeited. That work is happening now, in Vienna. But it cannot happen at the scale Europe needs without being recognised for what it is: not a niche technical service, but a strategic necessity.


VRULL is a compiler and AI software engineering firm based in Vienna, Austria. We build the software stack between silicon and AI workloads, with a focus on RISC‑V and ARM architectures. Reach out at contact@vrull.eu or visit vrull.eu.