
Author

Mason Cooper

Updated on March 29, 2026

How a CPU Microprocessor Is Made

This is a quick tutorial on how microprocessors and other integrated circuits are made, or "fabbed": how a chunk of silicon becomes a device with millions of transistors that now runs almost everything in your life.

Making the Wafer

CPUs are made mostly of an element called silicon. Silicon is quite common in Earth's crust and is a semiconductor: depending on which materials are added to it, it can either conduct or block current when a voltage is applied. That controllable "switch" is what makes a CPU work. Modern CPUs contain millions of transistors.

Raw silicon

The first stage in making a CPU is to make the wafers it is built on. This process begins with melting polysilicon, together with minute amounts of electrically active elements such as arsenic, boron, phosphorus, or antimony, in a quartz crucible (a container that won't melt at high temperatures).

Silicon crystallizes in the diamond cubic structure (two interpenetrating face-centered cubic lattices).

Once the melt has reached the desired temperature, a silicon seed crystal, or "seed," is lowered into the melt. The melt is slowly cooled to the required temperature, and crystal growth begins around the seed. As growth continues, the seed is slowly extracted, or "pulled," from the melt. As the ingot is pulled it is slowly rotated, which helps even out any temperature variations in the melt. The temperature of the melt and the speed of extraction govern the diameter of the ingot, and the concentration of electrically active elements in the melt governs the electrical properties of the wafers to be made from it. This is a complex, proprietary process requiring many control features on the crystal-growing equipment. The ingot naturally tends toward a circular cross-section because of the crystal structure itself and the surface tension of the liquid.
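As a rough illustration of how pull speed governs diameter, a steady-state mass balance implies the ingot's cross-sectional area scales inversely with pull speed. The simple model and the numbers below are illustrative assumptions, not real process parameters:

```python
import math

# Toy Czochralski model (an illustration, not a process controller):
# if the crystallization rate is fixed, cross-sectional area * pull
# speed is constant, so diameter ~ 1/sqrt(pull speed).
def ingot_diameter_mm(ref_diameter_mm, ref_pull_mm_min, pull_mm_min):
    """Estimate the new diameter when only the pull speed changes."""
    return ref_diameter_mm * math.sqrt(ref_pull_mm_min / pull_mm_min)

# If a 150 mm (6") ingot grows at 1.0 mm/min, halving the pull speed
# widens it by roughly sqrt(2):
print(round(ingot_diameter_mm(150, 1.0, 0.5), 1))  # 212.1
```

Real crystal pullers close this loop continuously, adjusting heater temperature and pull rate together to hold the diameter constant.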

A 6″ ingot is drawn out of the melt. This process can take several hours.

This results in a large cylindrical ingot. To be useful the ingot must be very pure. The ends and edges contain the highest concentration of impurities (which segregate there as the crystal solidifies), so the ends are cut off and the edges are ground down until the ingot is the proper diameter.

Next the wafers are cut from the ingot. They are usually cut 1-2mm thick with a fast wire saw. The edges of these wafers are then rounded to prevent chipping and edge cracks. The wafers are then ground and polished, both chemically and mechanically, to produce a very flat, mirror-like surface.
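To see why slicing matters economically, here is a back-of-the-envelope count of wafers per ingot; the slice thickness matches the text, and the kerf (material destroyed by the saw on each cut) is an assumed figure:

```python
# Assumed numbers for illustration: a 1 m usable ingot, 1.5 mm slices,
# and 0.2 mm of silicon lost to the wire saw's kerf on every cut.
def wafers_from_ingot(ingot_len_mm, wafer_mm=1.5, kerf_mm=0.2):
    return int(ingot_len_mm // (wafer_mm + kerf_mm))

print(wafers_from_ingot(1000))  # 588
```

Every fraction of a millimeter saved on slice thickness or kerf yields more sellable wafers from the same hours-long crystal pull.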

Wafers may then be heated to help remove any defects (annealing).

The wafers are then inspected with a laser to find any surface defects.

A single crystal layer (epilayer) may then be added to the surface of the wafer.
The wafer is now ready for etching.

While the way CPUs work may seem like magic, it’s the result of decades of clever engineering. As transistors—the building blocks of any microchip—shrink to microscopic scales, the way they are produced grows ever more complicated.

Photolithography

Transistors are now so impossibly small that manufacturers can't build them using normal methods. While precision lathes and even 3D printers can make incredibly intricate creations, they usually top out at micrometer levels of precision (about one twenty-five-thousandth of an inch) and aren't suitable for the nanometer scales at which today's chips are built.

Photolithography solves this issue by removing the need to move complicated machinery around very precisely. Instead, it uses light to etch an image onto the chip—like a vintage overhead projector you might find in classrooms, but in reverse, scaling the stencil down to the desired precision.

The image is projected onto a silicon wafer, which is machined to very high precision in controlled laboratories, as any single speck of dust on the wafer could mean losing out on thousands of dollars. The wafer is coated with a material called a photoresist, which responds to the light and is washed away, leaving an etching of the CPU that can be filled in with copper or doped to form transistors. This process is then repeated many times, building up the CPU much like a 3D printer would build up layers of plastic.

The Issues With Nano-Scale Photolithography


It doesn’t matter if you can make the transistors smaller if they don’t actually work, and nano-scale tech runs into a lot of issues with physics. Transistors are supposed to stop the flow of electricity when they’re off, but they’re becoming so small that electrons can flow right through them. This is called quantum tunneling and is a massive problem for silicon engineers.

Defects are another problem. Even photolithography has a cap on its precision. It's analogous to a blurry image from the projector: it's not quite as clear when blown up or shrunk down. Currently, foundries mitigate this effect by using "extreme" ultraviolet light, a much shorter wavelength than humans can perceive, produced with lasers in a vacuum chamber. But the problem will persist as feature sizes shrink.
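The "cap on precision" can be made concrete with the standard Rayleigh criterion for optical lithography, CD = k1 × wavelength / NA. The k1 and numerical-aperture values below are generic textbook assumptions, not any particular scanner's specifications:

```python
# Rayleigh criterion: smallest printable feature = k1 * wavelength / NA.
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.25):
    return k1 * wavelength_nm / numerical_aperture

# Deep-UV (193 nm immersion) vs extreme-UV (13.5 nm), with assumed NAs:
print(round(min_feature_nm(193, 1.35), 1))   # 35.7
print(round(min_feature_nm(13.5, 0.33), 1))  # 10.2
```

The jump from 193 nm to 13.5 nm light is what buys the roughly 3-4x improvement in minimum feature size, even though EUV optics have a lower numerical aperture.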

Defects can sometimes be mitigated with a process called binning: if a defect hits a CPU core, that core is disabled and the chip is sold as a lower-end part. In fact, most CPU lineups are manufactured from the same blueprint, with cores disabled on the cheaper models. If a defect hits the cache or another essential component, the chip may have to be thrown out, resulting in lower yields and higher prices. Newer process nodes, like 7nm and 10nm, have higher defect rates early in their lives and are more expensive as a result.
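The yield effect described above is commonly sketched with a Poisson defect model, yield = exp(-defect density × die area); the numbers here are illustrative assumptions, not real fab data:

```python
import math

def die_yield(defects_per_cm2, die_area_cm2):
    # Probability that a die of the given area catches zero defects.
    return math.exp(-defects_per_cm2 * die_area_cm2)

# The same defect density hurts a big die far more than a small one,
# which is why large chips get binned or discarded more often:
print(round(die_yield(0.1, 1.0), 3))  # 0.905
print(round(die_yield(0.1, 6.0), 3))  # 0.549
```

This is also why chiplet designs, which split one big die into several small ones, can be much cheaper to manufacture.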

Packaging it Up

Packaging the CPU for consumer use is more than just putting it in a box with some styrofoam. When a CPU is finished, it’s still useless unless it can connect to the rest of the system. The “packaging” process refers to the method where the delicate silicon die is attached to the PCB most people think of as the “CPU.”

This process requires a lot of precision, but not as much as the previous steps. The CPU die is mounted to a package substrate, and electrical connections are run to all of the pins that make contact with the motherboard. Modern CPUs can have thousands of pins; the high-end AMD Threadripper has 4,094 of them.

Since the CPU produces a lot of heat, and should also be protected from the front, an “integrated heat spreader” is mounted to the top. This makes contact with the die and transfers heat to a cooler that is mounted on top. For some enthusiasts, the thermal paste used to make this connection isn’t good enough, which results in people delidding their processors to apply a more premium solution.
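The heat path just described behaves like resistances in series: each layer adds a temperature rise proportional to power. The thermal resistance values below are made-up illustrative numbers, not measurements of any real CPU:

```python
# Die temperature = ambient + power * (sum of thermal resistances, K/W)
# for the paste (TIM), the integrated heat spreader, and the cooler.
def die_temp_c(ambient_c, power_w, r_tim=0.05, r_ihs=0.10, r_cooler=0.25):
    return ambient_c + power_w * (r_tim + r_ihs + r_cooler)

# Why delidding helps: shaving the interface resistance saves a few
# degrees at high power draw.
print(round(die_temp_c(25, 100), 1))              # 65.0
print(round(die_temp_c(25, 100, r_tim=0.02), 1))  # 62.0
```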

Once it’s all put together, it can be packaged into actual boxes, ready to hit the shelves and be slotted into your future computer. With how complex the manufacturing is, it’s a wonder most CPUs are only a couple hundred bucks.

If you’re curious to learn even more technical information about how CPUs are made, check out Wikichip’s explanations of lithography processes and microarchitectures.


The computer you are using to read this page uses a microprocessor to do its work. The microprocessor is the heart of any normal computer, whether it is a desktop machine, a server or a laptop. The microprocessor you are using might be a Pentium, a K6, a PowerPC, a Sparc or any of the many other brands and types of microprocessors, but they all do approximately the same thing in approximately the same way.

A microprocessor — also known as a CPU or central processing unit — is a complete computation engine that is fabricated on a single chip. The first microprocessor was the Intel 4004, introduced in 1971. The 4004 was not very powerful — all it could do was add and subtract, and it could only do that 4 bits at a time. But it was amazing that everything was on one chip. Prior to the 4004, engineers built computers either from collections of chips or from discrete components (transistors wired one at a time). The 4004 powered one of the first portable electronic calculators.

If you have ever wondered what the microprocessor in your computer is doing, or if you have ever wondered about the differences between types of microprocessors, then read on. In this article, you will learn how fairly simple digital logic techniques allow a computer to do its job, whether it's playing a game or spell-checking a document!


The first microprocessor to make it into a home computer was the Intel 8080, a complete 8-bit computer on one chip, introduced in 1974. The first microprocessor to make a real splash in the market was the Intel 8088, introduced in 1979 and incorporated into the IBM PC (which first appeared in 1981). If you are familiar with the PC market and its history, you know that the PC market moved from the 8088 to the 80286 to the 80386 to the 80486 to the Pentium to the Pentium II to the Pentium III to the Pentium 4. All of these microprocessors are made by Intel and all of them are improvements on the basic design of the 8088. The Pentium 4 can execute any piece of code that ran on the original 8088, but it does it about 5,000 times faster!

Since 2004, Intel has introduced microprocessors with multiple cores and millions more transistors. But even these microprocessors follow the same general rules as earlier chips.

[Table: Intel processors from the 8088 to the Pentium 4, listing introduction date, transistors, microns, clock speed, data width, and MIPS]

Additional information about the table on this page:

  • The date is the year that the processor was first introduced. Many processors are re-introduced at higher clock speeds for many years after the original release date.
  • Transistors is the number of transistors on the chip. You can see that the number of transistors on a single chip has risen steadily over the years.
  • Microns is the width, in microns, of the smallest wire on the chip. For comparison, a human hair is 100 microns thick. As the feature size on the chip goes down, the number of transistors rises.
  • Clock speed is the maximum rate that the chip can be clocked at. Clock speed will make more sense in the next section.
  • Data Width is the width of the ALU. An 8-bit ALU can add/subtract/multiply/etc. two 8-bit numbers, while a 32-bit ALU can manipulate 32-bit numbers. An 8-bit ALU would have to execute four instructions to add two 32-bit numbers, while a 32-bit ALU can do it in one instruction. In many cases, the external data bus is the same width as the ALU, but not always. The 8088 had a 16-bit ALU and an 8-bit bus, while the modern Pentiums fetch data 64 bits at a time for their 32-bit ALUs.
  • MIPS stands for “millions of instructions per second” and is a rough measure of the performance of a CPU. Modern CPUs can do so many different things that MIPS ratings lose a lot of their meaning, but you can get a general sense of the relative power of the CPUs from this column.
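The Data Width point above can be sketched in code: an 8-bit ALU adds two 32-bit numbers one byte at a time, chaining the carry, where a 32-bit ALU needs a single operation. A minimal Python simulation:

```python
def add32_with_8bit_alu(a, b):
    """Add two 32-bit values using only 8-bit additions plus a carry."""
    result, carry = 0, 0
    for byte in range(4):  # least-significant byte first
        s = ((a >> (8 * byte)) & 0xFF) + ((b >> (8 * byte)) & 0xFF) + carry
        result |= (s & 0xFF) << (8 * byte)
        carry = s >> 8  # carry feeds into the next byte-wide addition
    return result & 0xFFFFFFFF

print(hex(add32_with_8bit_alu(0x12345678, 0x11111111)))  # 0x23456789
```

Four dependent additions instead of one is exactly why a wider ALU makes the same program run faster at the same clock speed.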

From this table you can see that, in general, there is a relationship between clock speed and MIPS. The maximum clock speed is a function of the manufacturing process and delays within the chip. There is also a relationship between the number of transistors and MIPS. For example, the 8088 clocked at 5 MHz but only executed at 0.33 MIPS (about one instruction per 15 clock cycles). Modern processors can often execute at a rate of two instructions per clock cycle. That improvement is directly related to the number of transistors on the chip and will make more sense in the next section.
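The clock-speed/MIPS relationship reduces to simple arithmetic: MIPS = clock (in MHz) × instructions per cycle. Using the 8088 figures from the paragraph above and its stated modern rate of two instructions per cycle:

```python
def mips(clock_mhz, instructions_per_cycle):
    return clock_mhz * instructions_per_cycle

# 8088: 5 MHz at 0.33 MIPS works out to ~15 clock cycles per instruction.
print(round(5 / 0.33))   # 15
# A hypothetical 3 GHz chip retiring 2 instructions per cycle:
print(mips(3000, 2))     # 6000 MIPS
```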

A chip is also called an integrated circuit. Generally it is a small, thin piece of silicon onto which the transistors making up the microprocessor have been etched. A chip might be as large as an inch on a side and can contain tens of millions of transistors. Simpler processors might consist of a few thousand transistors etched onto a chip just a few millimeters square.

Is it worth spending big bucks on a highfalutin CPU if all you’re doing is watching “Gangnam Style” on YouTube?

In the old days, choosing a computer was easy: you bought the one with the fastest processor you could afford. And you knew which processor was fastest (more or less) by its numerical clock-speed rating.

These days it’s a lot trickier. Only hard-core techies (and those with the patience to search in Google) know the difference between, say, an AMD A4-3305M and an Intel Core i3-2350M.

And even then, does it really matter? There’s a strong argument to be made that processor performance, even in low-cost, entry-level PCs, has reached a level that’s good enough for most users — folks who use their machines mostly for word processing, e-mail, and Web stuff.

Of course, more and more users are turning to tablets for those activities, but that’s another topic entirely.

Obviously some people need all the processing power they can get — though usually that’s for graphics-intensive tasks like gaming, video editing, and Photoshop. And that’s where you need a desktop with a decent video card or a laptop with decent discrete graphics. Dual-core versus quad-core versus Core-this-or-that is less of a factor.

There’s another factor that can contribute to overall PC performance, one that can actually trump the hardware you have. See, I own a fairly state-of-the-art Core i7 machine with 8GB of RAM, a 750GB hard drive, discrete graphics, and the like. It’s barely two years old. And you know what? It’s a slowpoke compared with the cheap dual-core laptop I bought less than a year ago. It takes several minutes to boot and often just seems to bog down for no reason.

Know why? Windows. Even a system with top-tier hardware can turn to molasses when Windows’ arteries get clogged, which in my experience tends to happen 12 to 18 months in. Granted, I install and uninstall a lot of software on my primary system, and the aforementioned cheapie laptop has little more than Mozilla Thunderbird and Kingsoft Office.

In other words, it runs lean, which is how it’s able to stay speedy even when supposedly faster hardware bogs down.

But that just proves my point: even a “slow” PC can get the job done, especially if you stick to basic computing tasks.

What are your thoughts on this? Is the processor still a key consideration when you buy a new PC? Or do you agree that it’s not a big deal anymore?


The Central Processing Unit (CPU), or processor, is a component that acts as the brain of a computer system. Instead of actually thinking, however, it moves data around the system in ways defined by computer programmers. A CPU essentially performs three basic functions. It accepts input, processes data, and provides output. These are critically important to the operation of any computer system.

A dual-core CPU mounted to a motherboard.

Input is the process by which external data is entered into a computer. It is mainly provided by common input devices, such as a keyboard, mouse, scanner, or modem. Once the computer analyzes the input, that data is then processed and converted into output.

Output is the end result of the processed data input into the computer system. It refers to a process by which the CPU sends data to installed devices, such as a monitor, printer, or even a running computer program. The output data can either be stored temporarily or permanently, meaning the computer must have a way to contain this data while processing is being performed. This is where memory comes in.

A computer stores data in memory, and retrieves the data it requires from either Read-Only Memory (ROM) or Random Access Memory (RAM). ROM is permanent memory that retains data even when the system is turned off. RAM is temporary memory and, therefore, any data stored there will be deleted when the system is turned off. The CPU uses RAM to store and retrieve data on an as-needed basis. For example, the instructions needed to launch a program would be stored in and retrieved from RAM.

The cache also plays an important role in the functioning of a CPU. A cache is a small amount of high-speed memory that holds data the processor is likely to need again. Processors vary in how much cache, built from static RAM (SRAM), they include. SRAM is considerably faster than the Dynamic RAM (DRAM) used for the computer's main memory. The overall purpose of the cache is to increase the speed at which data is processed.

Data requests made by the CPU are handled by a cache controller, which can be built into either the motherboard or the processor itself. Because the cache is an internal component, it can be accessed directly, which maintains the speed of the processor. Without it, the computer would run dramatically slower, as the processor would be forced to wait for data to arrive from main system memory.
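That speed argument can be quantified with the standard average-memory-access-time formula, AMAT = hit time + miss rate × miss penalty. The cycle counts below are illustrative assumptions, not measurements of any real system:

```python
def amat_cycles(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# With a 4-cycle cache and a 200-cycle trip to main memory, even a 95%
# hit rate leaves memory stalls dominating the average:
print(round(amat_cycles(4, 0.05, 200), 1))  # 14.0
print(round(amat_cycles(4, 0.01, 200), 1))  # 6.0
```

Pushing the hit rate from 95% to 99% more than halves the average access time, which is why CPU designers spend so much die area on cache.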

The CPU is not only an important element, but a crucial one. Without it, the system would not be able to function at all. This critical component also determines the overall performance any given computer system will provide.

In most cases, a CPU must be connected to a motherboard in order to work properly.


Tech’s biggest players have fully embraced the AI revolution. Apple, Qualcomm and Huawei have made mobile chipsets that are designed to better tackle machine-learning tasks, each with a slightly different approach. Huawei launched its Kirin 970 at IFA this year, calling it the first chipset with a dedicated neural processing unit (NPU). Then, Apple unveiled the A11 Bionic chip, which powers the iPhone 8, 8 Plus and X. The A11 Bionic features a neural engine that the company says is “purpose-built for machine-learning,” among other things.

Last week, Qualcomm announced the Snapdragon 845, which sends AI tasks to the most suitable cores. There's not a lot of difference between the three companies' approaches; it ultimately boils down to the level of access each company offers to developers and how much power each setup consumes.

Before we get into that though, let’s figure out if an AI chip is really all that different from existing CPUs. A term you’ll hear a lot in the industry with reference to AI lately is “heterogeneous computing.” It refers to systems that use multiple types of processors, each with specialized functions, to gain performance or save energy. The idea isn’t new — plenty of existing chipsets use it — the three new offerings in question just employ the concept to varying degrees.
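The heterogeneous-computing idea can be sketched as a simple task router. The task-to-unit mapping below mirrors the examples in this article (DSP for hotword listening, GPU for image recognition) and is purely illustrative, not any vendor's actual scheduler:

```python
# Route each workload to the core type assumed to handle it best.
ROUTING = {
    "hotword_detection": "DSP",   # long-running, repetitive math
    "image_recognition": "GPU",   # highly parallel matrix work
}

def dispatch(task):
    # Anything without a specialized home falls back to the CPU.
    return ROUTING.get(task, "CPU")

print(dispatch("hotword_detection"))  # DSP
print(dispatch("image_recognition"))  # GPU
print(dispatch("email_sync"))         # CPU
```

As the article notes later, in practice this routing decision sits with developers and OEMs, not the chipset itself.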

The Snapdragon 845.

Smartphone CPUs from the last three years or so have used ARM’s big.LITTLE architecture, which pairs relatively slower, energy-saving cores with faster, power-draining ones. The main goal is to use as little power as possible, to get better battery life. Some of the first phones using such architecture include the Samsung Galaxy S4 with the company’s own Exynos 5 chip, as well as Huawei’s Mate 8 and Honor 6.

This year’s “AI chips” take this concept a step further by either adding a dedicated component to execute machine-learning tasks or, in the case of the Snapdragon 845, using other low-power cores to do so. For instance, the Snapdragon 845 can tap its digital signal processor (DSP) to tackle long-running tasks that require a lot of repetitive math, like listening out for a hotword. Activities like image-recognition, on the other hand, are better managed by the GPU, Qualcomm Director of Product Management Gary Brotman told Engadget. Brotman heads up AI and machine-learning for the Snapdragon platform.

Meanwhile, Apple’s A11 Bionic uses a dedicated neural engine to speed up Face ID, Animoji and some third-party apps. That means when you fire up those processes on your iPhone X, the A11 turns on the neural engine to carry out the calculations needed to either verify who you are or map your facial expressions onto talking poop.

On the Kirin 970, the NPU takes over tasks like scanning and translating words in pictures taken with Microsoft’s Translator, which is the only third-party app so far to have been optimized for this chipset. Huawei said its “HiAI” heterogeneous computing structure maximizes the performance of most of the components on its chipset, so it may be assigning AI tasks to more than just the NPU.

Differences aside, this new architecture means that machine-learning computations, which used to be processed in the cloud, can now be carried out more efficiently on a device. By using parts other than the CPU to run AI tasks, your phone can do more things simultaneously, so you are less likely to encounter lag when waiting for a translation or finding a picture of your dog.

Plus, running these processes on your phone instead of sending them to the cloud is also better for your privacy, because you reduce the potential opportunities for hackers to get at your data.

The A11 Bionic’s two “performance” cores and four “efficiency” cores.

Another big advantage of these AI chips is energy savings. Power is a precious resource that needs to be allocated judiciously because some of these actions can be repeated all day. The GPU tends to suck more juice, so if it’s something the more energy efficient DSP can perform with similar results, then it’s better to tap the latter.

To be clear, it’s not the chipsets themselves that decide which cores to use when executing certain tasks. “Today, it’s up to developers or OEMs where they want to run it,” Brotman said. Programmers can use supported libraries like Google’s TensorFlow (or more specifically its Lite mobile version) to dictate on which cores to run their models. Qualcomm, Huawei and Apple all work with the most popular options like TensorFlow Lite and Facebook’s Caffe2. Qualcomm also supports the newer Open Neural Networks Exchange (ONNX), while Apple adds compatibility for even more machine-learning models via its Core ML framework.

So far, none of these chips have delivered very noticeable real-world benefits. Chip makers will tout their own test results and benchmarks, which are ultimately meaningless until AI processes become a more significant part of our daily lives. We’re in the early stages of on-device machine learning being implemented, and developers who have made use of the new hardware are few and far between.

Right now, though, it’s clear that the race is on to make carrying out machine learning-related tasks on your device much faster and more power-efficient. We’ll just have to wait awhile longer to see the real benefits of this pivot to AI.

Images: Huawei (Kirin AI processor), Apple (A11 processor cores).

I have an AMD CPU with 8 cores and 2 threads per core. Linux (correctly) shows this as 16 “cpus”. However, sysfs actually shows 32 “possible” cpus, with 16 of them not present and offline:

To be clear, there’s nothing wrong here; there are indeed 16 logical CPUs present and online. What I’m not clear on is why Linux detects an additional 16 logical CPUs that are not present but possible.

I think I’ve found the relevant kernel docs, but I don’t see any indication of how the number of possible CPUs is chosen. (Note that it’s much lower than the kernel_max number of CPUs, which is 8191 on my system.)

(A little additional background: I have some code that needs to parse these values. Doing the right thing seems straightforward, but I’d like to have a clear docstring explaining why the number of possible CPUs can exceed the number of present CPUs on an ordinary desktop computer.)

1 Answer

A CPU is “possible” if there’s room for it in kernel memory. The number of possible CPUs is the maximum number of CPUs that can ever be brought online, including ones that are hotplugged after boot.

The documentation of this part of sysfs is in How CPU topology info is exported via sysfs:

possible: CPUs that have been allocated resources and can be brought online if they are present. [cpu_possible_mask]

But the more detailed documentation of cpu_possible_mask is in CPU hotplug in the Kernel:

Bitmap of possible CPUs that can ever be available in the system. This is used to allocate some boot time memory for per_cpu variables that aren’t designed to grow/shrink as CPUs are made available or removed. Once set during boot time discovery phase, the map is static, i.e no bits are added or removed anytime. Trimming it accurately for your system needs upfront can save some boot time memory.

This parameter can be configured through command line options. In the likely case that your hardware doesn’t support plugging in another CPU without rebooting and you don’t intend to hibernate your system and make it wake up with more CPUs, you can save a small amount of kernel memory by passing possible_cpus=16 on the kernel command line. On a typical PC or server, the amount is probably too small for it to be worth the effort.

In the absence of command line options, I think you need to read the source to figure out what’s going on. If the kernel is compiled without CPU hotplug support ( CONFIG_HOTPLUG_CPU ), it just looks at how many CPUs are present at boot time. If the kernel has CPU hotplug support, then according to a comment on prefill_possible_map in the source code:

  • If the BIOS specified disabled CPUs in ACPI/mptables use that.
  • The user can overwrite it with possible_cpus=NUM
  • Otherwise don’t reserve additional CPUs.

I haven’t verified that this is what the code does.

Note that the principle of what “possible CPUs” means applies to all architectures, but the ways to determine the number of CPUs are architecture-specific. In my answer I assume x86.
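Since the question mentions code that parses these sysfs values: the kernel’s “cpulist” format (e.g. 0-15 or 0,2-4,7, as read from files like /sys/devices/system/cpu/possible) can be expanded with a few lines of Python. A sketch:

```python
def parse_cpulist(text):
    """Expand a kernel cpulist string like "0,2-4,7" into CPU numbers."""
    cpus = []
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        elif part:
            cpus.append(int(part))
    return cpus

print(parse_cpulist("0-3"))      # [0, 1, 2, 3]
print(parse_cpulist("0,2-4,7"))  # [0, 2, 3, 4, 7]
```

On the asker’s machine this would yield 32 entries for possible but only 16 for present and online.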