The Question
Intel, the company that put the “silicon” in Silicon Valley, the technological powerhouse Gordon Moore and Andy Grove built, has fallen from grace and become a candidate for dramatic restructuring. This raises one of the most compelling business strategy questions of recent history: What went wrong?
The dominant narrative in the financial press is that “financialization” damaged Intel: that a focus on stock buybacks and dividends rather than R&D hurt the company. This is basically wrongheaded, as the story told here will demonstrate.
A second common narrative is that the board was filled with generalists who lacked understanding of the semiconductor business. This has been true, but was not the central issue.
A third common explanation is that bureaucracy strangled innovation. Intel certainly had bureaucracy, but was it the critical problem?
A fourth explanation is that Intel was “disrupted from below” by the ARM ecosystem, which arose in the mobile space. This has an element of truth, but it does not explain Intel’s stumbles in graphics and AI, or its dramatic loss of leadership in its core CPU business in the 2016-24 era.
The hard truth is that Intel pursued one of the world’s most coherent, specialized, and successful business strategies for almost three decades. It was this internally coherent specialization that led to its misses and failures.
Recent Events
In late 2024, Intel’s board gave CEO Pat Gelsinger the classic “quit or we will do it for you” ultimatum. Gelsinger had returned to Intel in 2021 to tackle the task of fixing everything, and the board had approved his ten-year turnaround plan. But just three years in, the negative numbers, the falling stock price, and the AI frenzy gripping the tech sector pushed the board to pull the plug.
Some say the board panicked, driven by short-term financial considerations. But to be fair, the numbers were ugly. Intel had lost $16.6 billion in a single quarter, the largest loss in its history. Revenue had fallen 30% from 2021. More crucially, Intel’s yield on new chips remained below 10%, while competitor TSMC appeared to be achieving yields roughly three times higher. Was there any light at the end of this tunnel?
Following Gelsinger’s departure, months of discussion ensued about potential deals with TSMC, spinning off the foundry business, and other restructuring options. In the end, in March 2025, Intel’s board appointed Lip-Bu Tan as CEO. Tan had previously served on Intel’s board, was a long-time technology investor, and had decades of experience in the semiconductor and software industries. Like Gelsinger before him, he promised to restore Intel’s semiconductor foundry business to its former luster and lead the company to a significant position in AI.
The Beginning: DRAMs To Microprocessors
In 1968, Gordon Moore and Robert Noyce left Fairchild to establish Intel. Their first hire was Andy Grove, who became CEO in 1987. Noyce had co-invented the integrated circuit, and three years earlier Moore had written an article for Electronics Magazine noting that advances in lithography allowed the number of transistors on a wafer to double every year. Since costs were primarily per wafer, the cost per transistor was halved with each doubling. (A decade later, he revised the doubling period to every two years.) This consistent reduction in the cost, size, and power consumption of each transistor became known as “Moore’s Law.”
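As a back-of-the-envelope illustration of this compounding (using my own toy numbers, not Moore’s or Intel’s figures), the short sketch below assumes a fixed cost per wafer and a doubling of transistor count every two years:

```python
# Illustrative only: hypothetical starting values chosen to show the shape
# of the curve, not actual Intel or industry figures.
wafer_cost = 1000.0        # assumed constant cost to process one wafer ($)
transistors = 1_000_000    # assumed starting transistor count per wafer

for year in range(0, 11, 2):                      # a decade of two-year doublings
    cost_per_transistor = wafer_cost / transistors
    print(f"year {year:2d}: {transistors:>12,} transistors/wafer, "
          f"${cost_per_transistor:.6f} per transistor")
    transistors *= 2
```

After five doublings the cost per transistor has fallen by a factor of 32; compounding of this kind, sustained over decades, is what made Moore’s Law an economic law as much as a technical one.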
Intel’s first product was a memory chip. As the company grew, it rode the benefits of Moore’s Law, and by the mid-1970s it dominated the rapidly growing DRAM memory chip market. Meanwhile, in 1971, Intel had produced its first microprocessor: the 4-bit 4004, designed to function in a calculator. Intel soon realized this programmable chip was a general-purpose device with potential uses far beyond calculators. It produced the 8-bit 8080 in 1974, a key ingredient of the “home computer” revolution (e.g., the Altair 8800). In 1978, it produced the 8086, which introduced the original “x86” architecture, a design that IBM adopted for its 1981 Personal Computer.
Intel's memory (DRAM) market position deteriorated significantly during the early 1980s. Its market share dropped from over 80% in the 1970s to just 2-3% by 1984, primarily due to competition from Japanese manufacturers. These competitors sold high-quality DRAM chips at lower prices, and the products of different companies became indistinguishable commodities.
The impact on Intel was severe. Earnings per share plummeted to $0.01 in 1985, followed by a $173 million loss in 1986. In response, Intel implemented layoffs and closed plants. Grove and Moore began discussing the famous decision to exit the memory business, the very business that had initially driven the company’s success. The choice was made easier by the fact that the microprocessor business was both profitable and expanding.
Intel’s struggle to compete effectively against Japanese DRAM producers was part of a broader leadership shift in selected manufacturing areas from the United States to Japan, and later to Korea, Taiwan, and China. Japanese producers operated more automated chip foundries and invested significantly more in both the statistical and human aspects of tighter process control. This resulted in fewer defective chips per wafer and lower costs per chip. (Grove later acknowledged that he could not match the Japanese quality control, referring to it as a “manufacturing shock.”) Furthermore, Intel maintained a bloated management structure compared to its Japanese competitors. Its 1986 Annual Report stated that the company was “left with an overhead structure appropriate to the $2–3 billion company we aimed to be rather than the $1.0–1.5 billion company we were becoming.”
Interestingly, Grove evaluated the DRAM business using fully allocated costs, rather than just direct costs. (For most years, DRAM revenue did cover direct costs.) An alternative could have been to follow the suggestions of some Intel engineers and build a modern low-overhead $100 million plant to produce 1-megabit DRAMs to the highest global standards. While this wouldn't generate large profits, it would keep Intel at the forefront of semiconductor manufacturing efficiency. However, such a move would have conflicted with the management and strategy concepts that were popular at that time. Success, it was taught, stemmed from having a technical edge or differentiation advantage, not from competing on cost.
The Wintel Standard and Pushing Moore
The remarkable success of IBM’s PC and its clones spurred demand for Intel’s x86 processors. These machines combined the x86 architecture with Microsoft’s operating systems, creating a de facto standard for small computers. As IBM’s prominence waned, the Intel chip and the Microsoft operating system defined this family of computers, regardless of brand. When Microsoft Windows emerged, this lock-in became known as the “Wintel” standard.
During the 1990s and early 2000s, Intel enjoyed fabulous profit margins. While most chip makers had gross margins ranging from 3% to 22%, Intel’s were 50% to 60%. Much of this profitability was clearly due to the Wintel standard. At the same time, managers within the company attributed much of the company’s success to its mastery of Moore’s Law.
In practice, Moore’s Law was implemented as a coordinated roadmap for the semiconductor industry, encompassing lithography, materials, EDA tools, and manufacturing equipment from suppliers such as ASML and Applied Materials. Each node on the roadmap represented the smallest feature size on a microchip, measured in nanometers (nm).
Despite the availability of this standard roadmap, Intel managed to stay approximately 18 months ahead of its competitors in the quest for the next process node between 1990 and 2009, as evidenced by the transitions from 90nm to 65nm, 45nm, and 32nm. Its smaller features enabled industry-leading transistor speeds. Thus, Intel CPUs (like the Pentium, Core 2 Duo, and Core i7) consistently led industry-standard benchmarks for single-thread performance and power efficiency.
If the industry coordinated on a standard “roadmap,” how could Intel outpace rivals like IBM and AMD in the race to the next node?
Intel’s Focused Strategy
At the center of the strategy was Intel’s commitment to operating its own semiconductor fabs. Many other semiconductor companies had transitioned, or were transitioning, to fabless models, relying on merchant foundries such as TSMC and GlobalFoundries. Owning its fabs allowed Intel to avoid the coordination delays that fabless competitors faced and to capture optimization opportunities they could not. More importantly, it enabled close coordination between circuit design and manufacturing process engineering, a coordination focused intently on the speed and performance of its x86 CPUs.
As a manufacturer, Intel invested heavily in process R&D, and nearly all of that investment was aimed at enhancing x86 CPU speed and power. Support chips, by contrast, were produced using less advanced processes and older nodes. These included platform controller hubs (I/O, including USB), integrated graphics, and network interface controllers (Ethernet).
Intel achieved this tight integration of circuit design and fabrication by employing highly customized design rules. Users of merchant foundries had to adhere to the rules provided by the foundry, rules designed to ensure first-pass silicon success. Intel, in contrast, developed its advanced lithography processes in tandem with design, frequently overcoming timing bottlenecks by aligning process and design efforts.
Intel developed and maintained proprietary chip design tools for routing (where metal connections are laid out), placement (where logic gates are situated), timing analysis, and power and thermal analysis. In particular, Intel’s custom power grids required logic that commercial design tools could not manage.
Because Intel controlled both chip design and fabrication, it could simply ban the use of designs that created problems for lithography by imposing design rules that a merchant fab could not so easily enforce. As one analysis noted, “Intel is very comfortable with incredibly restrictive design rules since they are a microprocessor manufacturer and not a pure-play foundry. Intel can micromanage every aspect of design and manufacturing...”1
In practice, this meant that Intel's chip layouts tended to be grid-like and uniform, boosting yields on cutting-edge processes while constraining circuit designers to a limited range of geometries. This tight integration also allowed Intel to take risks with new materials and transistor structures, such as high-k dielectrics, strained silicon, and FinFETs, ahead of other foundries.
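To give a flavor of what “incredibly restrictive design rules” can mean in practice, here is a deliberately toy sketch. The pitch value, the single allowed orientation, and the check itself are my own hypothetical illustrations, not Intel’s actual rules:

```python
# Toy design-rule check (hypothetical): require every gate to sit on a fixed
# placement grid and to use a single allowed orientation. Rules of this kind
# simplify lithography at the cost of the designer's freedom.
GATE_PITCH_NM = 54                 # assumed fixed gate pitch, for illustration
ALLOWED_ORIENTATION = "vertical"   # assumed single allowed gate orientation

def check_gate(x_nm: int, orientation: str) -> list[str]:
    """Return the rule violations for one gate placement."""
    violations = []
    if x_nm % GATE_PITCH_NM != 0:
        violations.append(f"x={x_nm}nm is off the {GATE_PITCH_NM}nm grid")
    if orientation != ALLOWED_ORIENTATION:
        violations.append(f"orientation '{orientation}' is not allowed")
    return violations

# Three example gates: on-grid, off-grid, and rotated.
for x, orient in [(108, "vertical"), (130, "vertical"), (162, "horizontal")]:
    problems = check_gate(x, orient)
    print(f"gate at {x}nm ({orient}):", problems or "OK")
```

A real design-rule deck contains hundreds of such constraints; the point is that a designer who also owns the fab can simply forbid any layout the lithography team finds hard to print.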
Funding this level of foundry skill and design integration did not come cheaply. Intel spent heavily on research for process and design. For example, in 2011, Intel’s R&D spending was $8.4 billion, more than AMD’s total revenue that year. This spending was worthwhile as long as the market rewarded a performance edge in x86 CPUs.
The Costs of Intel’s Focused Strategy
Intel attempted to expand its product scope at least three times, and each attempt failed. These forays included high-performance graphics processors (GPUs), mobile chipsets, and AI chips. Several intertwined reasons explain this lack of success.
Intel’s intense focus on CPU performance hindered its development of more complex multi-function chips and systems. When Intel attempted to compete with NVIDIA in graphics in 1998, its i740 fell short because Intel’s design rules were optimized for high-speed CPUs rather than GPUs. Where CPUs require tight, fast logic, GPUs need wide data paths and high memory throughput. Intel designers typically used structured, automated digital layout tools that were not built to accommodate GPU-specific circuits such as shading pipelines and rasterization engines. Intel’s metal layout and transistor sizing rules prevented the i740 from achieving the wide data paths and custom logic blocks that NVIDIA’s engineers had hand-tuned.
In the mobile market, Intel was late in providing an integrated SoC (system-on-a-chip) solution. Mobile phone manufacturers wanted a complete system that included a CPU, power management, a cellular modem, Wi-Fi and Bluetooth radios, screen image processing, high-quality image processing for the camera, position and motion sensors, and more. While Qualcomm (and Apple) supplied all of these on a single chip, Intel could not provide a fully integrated solution. Its Medfield (2012) and Clover Trail (2013) offerings used an x86-based CPU but failed to integrate an LTE modem, advanced image processing, Wi-Fi and Bluetooth radios, power management, or a sensor hub.
As new market opportunities appeared in high-performance graphics and mobile devices, Intel planners tended to see faster, cheaper, smaller versions of the x86 CPU as eventually overtaking competitors’ designs. This mindset was evident in its attempts to design mobile SoCs around the x86 and create GPUs as arrays of x86 cores.
Intel’s high profit margins on x86 CPUs financed an expansive administration reluctant to invest in lower-margin opportunities. Historically, Intel maintained strict profitability criteria for new ventures, which limited investments in areas seen as low-margin or risky. One notable example was CEO Paul Otellini’s rejection of Apple’s request that Intel manufacture the iPhone’s processor. Others included the lack of focused effort in graphics GPUs until very late and an unwillingness to fully commit to integrated SoCs and modems for the mobile market.
A Failed Bet at 10nm
During the mid-2010s, the chip industry began to face the limits of lithography using the then-standard 193nm-wavelength ultraviolet (UV) tools. Engineers used various techniques to create small (< 20nm) features with 193nm light. Masks were distorted so that their interference patterns would indirectly produce the desired shapes. Nearby features were patterned in separate steps so that each would not interfere with the other. The gap between the lens and the wafer was filled with ultrapure water to sharpen the image. The hoped-for solution to these difficulties was extreme ultraviolet (EUV) lithography, using light with a wavelength of 13.5nm. But EUV technology had not yet matured, and no one could predict when it would be available.
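A rough way to see the problem is the standard Rayleigh resolution criterion, minimum half-pitch ≈ k1 × λ / NA. The sketch below plugs in typical published values (my own assumptions, not Intel-specific figures) to show why 193nm immersion tools bottom out around a ~40nm half-pitch per exposure, forcing multiple patterning for smaller features, while 13.5nm EUV can reach them directly:

```python
# Back-of-the-envelope Rayleigh criterion: half_pitch ~= k1 * wavelength / NA.
# The k1 and NA values are typical published figures, used here as assumptions.

def min_half_pitch(wavelength_nm: float, na: float, k1: float) -> float:
    """Smallest printable half-pitch (nm) for a single exposure."""
    return k1 * wavelength_nm / na

# 193nm immersion lithography: water between lens and wafer raises NA to ~1.35.
duv = min_half_pitch(wavelength_nm=193.0, na=1.35, k1=0.27)

# EUV lithography at 13.5nm with first-generation 0.33 NA optics.
euv = min_half_pitch(wavelength_nm=13.5, na=0.33, k1=0.40)

print(f"193nm immersion, single exposure: ~{duv:.0f} nm half-pitch")  # ~39 nm
print(f"EUV (13.5nm), single exposure:    ~{euv:.0f} nm half-pitch")  # ~16 nm
```

Under these assumptions, printing sub-20nm features with 193nm light requires splitting each layer across two, three, or four exposures, which is exactly the multiple-patterning path Intel chose.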
In this environment, Intel’s technical leaders grew skeptical that EUV lithography would be ready in time for its 10nm node, planned for 2016-17. Because Intel prided itself on leadership in the highest-performance, highest-power chips, its technical leaders developed a strategy for succeeding at 10nm with 193nm UV lithography. The basic idea was aggressive multiple patterning, including self-aligned quadruple patterning (SAQP) and even quintuple/sextuple patterning on specific layers. New materials such as cobalt were adopted for interconnects. Intel also introduced COAG (Contact Over Active Gate), positioning the contact directly atop the transistor gate to conserve space, and adopted a design featuring a “single dummy gate.”
Intel believed that these methods would allow it to significantly increase transistor density (~2.7× over 14nm), effectively outpacing competitors' plans. Intel insiders referred to their 10nm as a “7nm-class” technology. The expectation was that even if TSMC or Samsung rolled out EUV a bit later at their 7nm, Intel’s 10nm (on schedule for 2016–2017) would still achieve higher transistor density and smaller die sizes first. Intel officials expressed confidence that its multi-patterned 10nm would provide economic benefits that foundries, hampered by multi-pattern costs or EUV delays, might not be able to match.
By 2018, Intel’s expectations had collided with reality. Low yields on its 10nm node restricted production and angered buyers. At the same time, TSMC successfully ramped up a 7nm process in 2018 and planned a modest EUV-based 7nm+ for 2019, while Samsung was gearing up its own EUV-based 7nm for release the same year. As competitors began to adopt EUV, Intel was stuck resolving issues with its 10nm process and postponed its 7nm node to the early 2020s. Intel was no longer ahead. It wound up selling refreshed 14nm products against rivals’ 7nm offerings.
Intel adopted EUV for its 7nm node, aiming for production in 2021. However, production issues emerged again, pushing volume production back to late 2023. The main challenges included mask defects, partly caused by Intel’s dependence on its proprietary design rules, software, EDA tools, and workflows. Moreover, the shift to EUV demanded considerably more collaboration with external partners (ASML, Cadence, Synopsys, and others), which cut against Intel’s specialized, go-it-alone approach.
A New World
In 2025, Intel is opening its foundry to other chip designers and challenging TSMC’s 2nm node with its 18A (1.8nm) process node.
The technical bet at 18A combines EUV patterning with two process innovations: PowerVia and RibbonFET. PowerVia moves power delivery to the back side of the wafer, a potential gain in efficiency, especially for high-performance CPUs and AI chips. RibbonFET, Intel’s gate-all-around transistor, is expected to increase performance per watt and to outperform TSMC’s nanosheet approach. As in the past, Intel is trying to stay ahead of competitors through process innovation.
A successful implementation of 18A would mark a significant milestone for Intel. However, the company must address several intricate challenges beyond simply reaching the technical node to reclaim long-term industry leadership.
Customer Trust. Its past 10nm and 7nm struggles have damaged Intel’s customer credibility, especially for its merchant foundry business.
Non-Central x86. To be successful as a merchant foundry, Intel will have to put aside its aversion to alternative processor designs. It cannot count on the x86 architecture to carry it into the future.
Competition. TSMC has emerged as the dominant high-volume, high-performance foundry, and Samsung’s foundry has carved out a position as a lower-price leader. Regaining leadership from these competitors will be difficult.
Ecosystem. For Intel to thrive in the merchant foundry industry, it must build a strong, adaptable, and competitive ecosystem of tools, libraries, and IP to entice customers used to TSMC or Samsung’s offerings. Relying on its conventional proprietary design rules and practices will not suffice. Intel must facilitate the seamless and dependable integration of essential IP blocks (e.g., ARM cores, GPUs, high-speed I/O) that customers anticipate.
Costs. Intel must manage capital efficiency and trim its traditionally large management structure.
https://semiwiki.com/semiconductor-manufacturers/intel/1915-intel-22nm-soc-process-exposed/
Thanks, Professor Rumelt!
I would add to your analysis by noting that Intel was actually an early visionary and significant investor in EUV lithography (investing $1 billion in ASML in 2012), yet ironically failed to implement this technology in their own manufacturing until after competitors like TSMC had already gained an advantage with it. This pattern eerily echoes Kodak's fate, where they invented digital photography but failed to pivot their business model quickly enough, ultimately watching others capitalize on the very innovation they helped create.
Nobody on Earth knows the intricacies of Silicon Valley better than Richard Rumelt. I am looking forward to reading this paper. A humble suggestion: why not a comparative study of the “magnates,” i.e., Microsoft, Apple, Google, Nvidia, Amazon, Tesla, SpaceX? Sincerely yours, Frederic D.