Z-ZERO
IN THE NEWS

THE PRINTED CIRCUIT DESIGNER'S GUIDE TO...
Stackups: The Design Within the Design
From material selection and understanding laminate datasheets to impedance planning, glass-weave skew and rigid-flex materials, topic expert Bill Hargin has written a unique book on PCB stackups.
According to the author, “The difference between a high-speed PCB design that can be built, and a design that should be built, depends upon the backbone of the design itself: the stackup. The stackup touches every single high-speed signal and yet has had surprisingly little written about it.”
While perhaps not the final word on the subject of stackups, this book is a good place to kick off a broader discussion of stackup planning and material selection in an effort to comprehend what Hargin calls, “the design within the design.”


Resistive Loss is Only Skin Deep
Mitigating skin effect’s impact on high-speed signals.
WRITTEN BY: BILL HARGIN
While writing this article, I’ve been thinking of places that skin appears in nature and pop culture. When I started writing, I flipped on Skinwalker Ranch on the History Channel for the first time as background noise, and they were talking about magnetic fields, current flow, and Tesla coils.
Skin is said to be the largest organ in the human body. It has multiple layers and some amazing properties. Galvanic skin response, used in lie detectors, measures changes in skin conductance caused by sweat-gland activity. I suppose you could call that a “skin effect” too.
It’s perfectly reasonable for engineers and PCB designers to ask, “Where should I focus my attention?” insofar as loss is concerned. In Signal and Power Integrity – Simplified,1 Dr. Eric Bogatin points out five ways energy can be lost to the receiver while the signal is propagating down a transmission line:
- Radiative loss
- Coupling to adjacent traces
- Impedance mismatches and glass-weave skew (the latter being my addition)
- Conductor loss
- Dielectric loss.
Each of these mechanisms reduces or affects the received signal, but they have significantly different causes and remedies. Plenty of articles over the years have discussed managing impedance and crosstalk, including ones I’ve written. I’ve also written about managing loss through dielectric-material selection and copper roughness, one of the two components of conductor loss. The other contributor to conductor loss is commonly known as skin effect.
From DC to about 100MHz, the bulk resistivity and, by extension, the series resistance of copper transmission lines are constant, and current flow is uniform across the entire cross-section. AC currents, on the other hand, take the path of lowest impedance at higher frequencies, traveling in a thin shell at the outer surface of the conductor. The result is an effective reduction in the trace cross-section. At high frequencies, the depth within which current flows in a copper conductor is referred to as the skin depth, δ:
δ ≈ 2.1 / √f
where
δ = skin depth, in µm
f = frequency, in GHz.
In copper at 1GHz, for example, the current in a transmission-line cross-section is concentrated in a layer about 2.1µm thick on the perimeter or “skin” of the trace, shown graphically in FIGURE 1. At 10GHz, current flow concentrates in a layer about 0.66µm thick. Note: This relationship has nothing to do with trace width or any other parameter – only frequency.
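As a quick sanity check on those numbers, the relationship is easy to evaluate for any frequency. Below is a minimal Python sketch of my own (not from the article), assuming the copper rule of thumb δ ≈ 2.1/√f with f in GHz:

```python
import math

def skin_depth_um(f_ghz: float) -> float:
    """Approximate skin depth in copper, in micrometers, for frequency in GHz."""
    return 2.1 / math.sqrt(f_ghz)

for f_ghz in (0.07, 1.0, 10.0):
    print(f"{f_ghz * 1000:>6.0f} MHz -> skin depth ~ {skin_depth_um(f_ghz):.2f} um")
# ~7.9 um at 70 MHz, ~2.1 um at 1 GHz, ~0.66 um at 10 GHz
```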
Signal resistance depends on the actual cross-section the current is flowing through. So, at higher frequencies, like the 10GHz frequency point where the skin depth is 0.66µm in Figure 1, resistance will increase with frequency. It’s important to note the only thing that’s changing to cause this increase in resistance is the cross-section through which the current is flowing.
FIGURE 2 illustrates the skin effect phenomenon for a 0.5-oz. symmetrical stripline trace at various frequencies. The top cross-section shows that at 70MHz current will flow through the entire cross-sectional area, as the skin depth still reaches the midpoint of the trace in the vertical. Skin depth, δ, is 7.9µm, half the thickness of half-ounce copper after processing. Resistance will be unaffected for half-ounce copper at this frequency, and currents will follow the path of least resistance.
The second image shows the same trace cross-section at 1GHz. Following the graph in Figure 1, the skin depth, δ, is 2.1µm. This is shown by the orange “skin” around the perimeter. At 1GHz, the blue area represents the remaining area where there is no current flow. Note the “current crowding” of high-frequency signal components on the top and bottom of the trace cross-section. Above the frequency at which skin effect kicks in – the “skin-effect onset frequency,” as some call it – signals follow the path of least inductance. (An entire article could be written on this subject alone.)
The third cross-section shows the skin depth at 10GHz for the same half-ounce trace. Note δ is reduced to 0.66µm, as is seen in the plot in Figure 1.



It’s instructive to perform the same exercise for 1-oz. copper. As FIGURE 3 shows, 1-oz. copper, with a post-processing thickness of 30.5µm, utilizes the entire cross-section at a skin depth of 15.7µm up to 5MHz. Above this skin-effect onset frequency, resistance will increase, and this is shown in the other two cross-sections at 1 and 10GHz.
A few things are worth noting now that we’ve looked at both 0.5 and 1-oz. copper. The first is that the skin depth is the same for both copper weights, which means that for the same trace width, the current has roughly the same cross-sectional area to flow through. What differs is how that skin depth compares to the total copper cross-section, but above the skin-effect onset frequency we don’t really care about the blue regions in Figures 2 and 3.
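It’s also worth pausing on what “onset” means, since it depends on where you draw the line. The sketch below is my own back-of-envelope estimate using the δ ≈ 2.1/√f rule of thumb and the post-processing thicknesses quoted with the figures; the ~70MHz onset cited for 0.5-oz. copper corresponds roughly to the skin depth reaching half the trace thickness, while the ~5MHz figure cited for 1-oz. copper is closer to the skin depth equaling the full thickness.

```python
def onset_frequency_mhz(copper_thickness_um: float, criterion: float) -> float:
    """Frequency (MHz) at which skin depth shrinks to criterion * trace thickness.

    Solves 2.1 / sqrt(f_GHz) = criterion * t_um for f, then converts to MHz.
    criterion = 0.5 -> current from both faces just fills the cross-section;
    criterion = 1.0 -> skin depth equals the full trace thickness.
    """
    return 1000.0 * (2.1 / (criterion * copper_thickness_um)) ** 2

for label, t_um in (("0.5-oz copper", 15.7), ("1-oz copper", 30.5)):
    print(f"{label}: ~{onset_frequency_mhz(t_um, 0.5):.0f} MHz (delta = t/2), "
          f"~{onset_frequency_mhz(t_um, 1.0):.0f} MHz (delta = t)")
```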
The resistive loss for a stripline trace can be approximated as:

Lossresistive ≈ 36 × Length × √f / (w × Z0)
where
Lossresistive = resistive loss (attenuation), in dB
Length = trace length in inches
w = trace width in mils
Z0 = the single-ended impedance (ohms)
f = frequency (GHz).
Note that trace length, frequency and impedance are the biggest factors in this equation. Frequency and length increase loss, as you would expect, and higher impedance reduces it. Trace width pulls resistive loss down too, but width (and trace thickness in the vertical) also sits in the denominator of the impedance relationship, so widening a trace lowers Z0 and partially offsets its benefit to resistive loss. Thickness, which is a small value for signal layers whether 0.5 or 1.0-oz. copper is used, is a minor factor compared to the others. As Figures 2 and 3 show, currents and electromagnetic fields crowd toward the adjacent reference planes in the vertical, whether 0.5 or 1.0-oz. copper is used.
Let’s plug in some numbers for a 36″ backplane as an example. At 10GHz, a 50Ω stripline with a width of 4.9 mils will have a conductor attenuation of approximately 36 × √10 / (4.9 × 50) ≈ 0.46dB/in. Across the 36″ run length, that comes to 16.7dB of resistive loss.
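The same arithmetic is easy to script. Here’s a minimal sketch of my own using the approximation above; it reproduces the 0.46dB/in. and 16.7dB figures:

```python
import math

def resistive_loss_db_per_inch(f_ghz: float, width_mils: float, z0_ohms: float) -> float:
    """Skin-effect (resistive) attenuation in dB/inch: 36 * sqrt(f) / (w * Z0).

    f in GHz, w in mils, Z0 in ohms; copper roughness and dielectric loss ignored.
    """
    return 36.0 * math.sqrt(f_ghz) / (width_mils * z0_ohms)

per_inch = resistive_loss_db_per_inch(f_ghz=10.0, width_mils=4.9, z0_ohms=50.0)
print(f"{per_inch:.2f} dB/in, {per_inch * 36.0:.1f} dB over a 36-inch run")
```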
Increasing trace width is a commonly considered option for reducing resistive loss. Several years ago, I sat in on Lee Ritchey’s “Getting to 32 Gb/s” workshop, and he had a few things to say on this subject that bear repeating.
Ritchey2 mentioned that increasing trace width reduces impedance. Fair enough. He went on to say that to maintain the 50Ω single-ended impedance required for each line in a differential pair, the dielectric thickness needs to increase, increasing the overall thickness of the PCB, along with the cost due to the additional dielectric material. He pointed out dielectric loss dominates the loss problem for common laminates, and selecting a lower-loss dielectric provides more leverage than using wider traces to reduce skin-effect losses.
It’s pretty easy to show this with a good 2-D field solver, which we’ll do next, reusing our 36″ asymmetrical stripline backplane example above. For a 4.9-mil line width and 0.5-oz. copper, the insertion loss due to the skin effect (aka resistive loss) is 0.35dB/in., as shown in FIGURE 4. While the results are in the same ballpark, the simulated resistive loss is a good bit lower than the calculated value above (0.46dB/in.). I trust a field solver more than the equation-based approximation, partly because the field solver solves a detailed model based on Maxwell’s equations, but also because of its flexibility. A good 2-D field solver allows inclusion of dielectric loss and copper roughness in the same simulation, and adjustments between microstrip and both symmetrical and asymmetrical stripline configurations are automated in field-solver software as well.


Some may be confused into thinking an interrelationship exists between different types of loss – for example, between resistive loss and dielectric loss. Dielectric loss is tied to a dielectric material’s loss tangent, which is represented by tan(δ). No connection exists between the δ in tan(δ) and the δ in skin depth. And, if you look at the resistive loss equation above, there’s no connection between Lossresistive and loss tangent or dissipation factor (Df).
Scanning the resistive loss equation cited above, we can see factors that relate to everything surrounding the trace, including Dk, which ties to Z0, but not copper roughness or Df, as noted above. Contributions from each of these can be calculated or simulated separately and then summed together, as we’ll do in the example below.
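For comparison with the resistive-loss approximation, dielectric loss has its own commonly cited rule of thumb, roughly 2.3 × f × Df × √Dk dB/in. with f in GHz. That formula is not given in this article, so treat the sketch below as my own assumption rather than the author’s method; it simply shows that Df and Dk drive the dielectric term while never entering the resistive-loss equation above.

```python
import math

def dielectric_loss_db_per_inch(f_ghz: float, df: float, dk: float) -> float:
    """Rule-of-thumb dielectric attenuation in dB/inch (f in GHz).

    Df drives this term but never appears in the resistive-loss equation;
    the two mechanisms are independent and simply add in dB.
    """
    return 2.3 * f_ghz * df * math.sqrt(dk)

# Example with the laminate assumed in the backplane exercise: Dk = 3.6, Df = 0.005 at 10GHz
print(f"{dielectric_loss_db_per_inch(10.0, 0.005, 3.6):.2f} dB/in")  # ~0.22 dB/in
```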
Let’s say we’re starting from scratch on the backplane interconnect outlined above. We’ll assume all we know is it needs to be 50Ω, single-ended, 36″ in length, and we want to keep total loss at 20dB or lower (0.55dB/in.) to prevent excessive power consumption from transmitter pre-emphasis and the receiver’s equalization circuitry. Ignoring vias for this particular example, we’ll start with a symmetrical stripline with an initial dielectric height of 4 mils, a Dk of 3.6 and a Df of 0.005 at 10GHz. Ideally, we’d like the Dk to be even lower because it helps keep the board thickness and cost down and helps with loss. (Dk is in the denominator of the Z0 relationship, and Z0 is in the denominator of the loss relationship. As a result, there’s a direct connection between Dk and loss.) We’ll also say we would prefer 0.5-oz. copper because it’s less expensive, but we’re willing to consider 1-oz. copper. Copper roughness will start at Rz=5.0µm. (Note: Many equations regarding copper roughness use RMS roughness, which is a hard number to obtain from laminate and PCB fabricators, so I tend to use Rz, the peak-to-peak measurement, which is a rather easy number to obtain with a profilometer.)
FIGURE 6 shows the result, but in our initial swing at hitting 0.55dB/in. we are pretty far off. The copper roughness contribution alone is consuming most of our interconnect loss budget, and at 0.54dB/in. it’s more than twice the dielectric loss. We’ll start here first.

FIGURE 6. Simulation of insertion loss box, showing the comparative contributions of dielectric loss, skin effect, and copper roughness. (Source: Z-zero Z-solver software.)

FIGURE 7. Simulation of insertion loss box, showing the comparative contributions of dielectric loss, skin effect, and copper roughness, after switching to Rz=1μm copper. (Source: Z-zero Z-solver software.)

FIGURE 8. Widening the trace by 1 mil reduced resistive loss by 0.06dB/in., and we had to move to a thicker dielectric, 4.5 mils, to maintain our impedance target. (Simulated with Z-zero Z-solver software.)
A good backplane fabricator can build PCBs with a roughness of 1.5µm on the “process” or prepreg side. Most hardware designers working on long, high-frequency backplanes are aware smooth copper comes at a price premium, so we’ll try Rz=2µm and then Rz=1µm. An Rz roughness of 2µm brings us to a copper-roughness loss of 0.11dB/in. and a total loss of 0.77dB/in. This is much better, of course, but we still have a good bit of loss to trim from our design, so it’s worth trying Rz=1µm copper. That brings us to 0.07dB/in. for copper roughness and a total loss of 0.73dB/in., as shown in FIGURE 7. Note the resistive loss from the skin effect didn’t change at all; as noted above, there’s no interrelationship between these two parameters.
Now we need to look at where we’re going to get the last 0.18dB/in. The resistive loss or skin effect looks like the biggest remaining contributor, so against my own best judgment from experience, I’ll go there next in this example. To hit 50Ω with this example required a trace width of 3.77 mils. That’s doable, but a bit on the aggressive side from a manufacturing standpoint and possibly from a resistive loss standpoint. Let’s bump that up by a mil and see if we can find a laminate construction with a lower Dk to help us hit our impedance target. A good number of materials have Dks in the 3.3 range with Df values at or below 0.005. FIGURE 8 shows that widening the trace by 1 mil only reduced resistive loss by 0.06dB/in., and we had to move to a thicker dielectric, 4.5 mils, to maintain our impedance target. As Ritchey mentions, this seems a less-than-optimal tradeoff.
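Because the individual contributions simply add in dB, a running loss budget makes it easy to track tradeoffs like the ones above. The sketch below is a bookkeeping helper of my own, not the Z-solver simulation. The roughness value is the Rz=1µm number quoted with Figure 7, the dielectric value comes from the rule-of-thumb sketch above, and the skin-effect value is back-calculated so the pieces sum to the quoted 0.73dB/in. total; a field solver would replace all three.

```python
def check_loss_budget(contributions_db_per_inch: dict, length_in: float, budget_db: float) -> None:
    """Sum per-inch loss contributions and compare the total against a budget."""
    per_inch = sum(contributions_db_per_inch.values())
    total = per_inch * length_in
    for name, value in contributions_db_per_inch.items():
        print(f"  {name:<18}{value:5.2f} dB/in")
    verdict = "within budget" if total <= budget_db else "over budget"
    print(f"  {'total':<18}{per_inch:5.2f} dB/in -> {total:.1f} dB over {length_in:.0f} in. ({verdict})")

# Illustrative values for the Rz = 1 um case (Figure 7); only the roughness term and
# the 0.73 dB/in total are quoted in the text, the split is an assumption.
check_loss_budget(
    {"dielectric": 0.22, "skin effect": 0.44, "copper roughness": 0.07},
    length_in=36.0,
    budget_db=20.0,
)
```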

This process may seem tedious, but no one said designing 36″ backplanes is easy. Nevertheless, spending a little time with a handy software tool can give you a feel for the tradeoffs. One advantage is you can try things almost as fast as you can think of them.
We’ve seen that the physics of the skin effect make it hard to influence. But before we rule out changing copper weight or trace width completely, I thought I’d pass along a tip I’ve learned through many hours of experimenting with the tradeoffs: as fine-tuning knobs for impedance and resistive loss, these two parameters are great, especially when working with a sharp pencil.
If you can make material and routing decisions like this early in the design process, you’ll avoid prototype surprises down the road or paying more than you need to for laminate systems that are overkill for a design. Making these choices early also allows you to avoid initial laminate lead times that can delay prototypes or early production. Because of prepreg shelf lives, fabricators only carry the laminates they know they can use within six months or less, so a just-in-time approach is usually followed. As with many other aspects of life, planning gives more options and fewer surprises. You can feed that expensive signal-integrity solution Dk and Df data from the actual laminate system you’re planning to use. Moreover, it may allow you to hold to NPI (new product introduction) schedules more consistently, while relieving some of the pressure you’ve been putting on PCB suppliers to make up for poor planning. Everyone wins!
I appreciate hearing from readers. Drop me an email if you read this far and found this article helpful!
1. Eric Bogatin, Signal and Power Integrity – Simplified, Pearson Education, 2010.
2. Lee Ritchey, “Getting to 32 Gb/s,” DesignCon Proceedings, 2018.
3. Brian Young, Digital Signal Integrity: Modeling and Simulation with Interconnects and Packages, Prentice Hall, 2000.

Bill Hargin of Z-zero: “It’s like the wild, wild West out there right now.”

Addressing the Issue of Mischaracterized Materials
Two parameters—the dielectric constant (Dk) and the loss tangent (Df)—are particularly important and often mischaracterized in the tables published by material manufacturers, they added. “We’ve seen some materials that match up on the dielectric constant, but miss on the loss tangent,” DeGroot said. “And we’ve seen other materials that match up on the loss tangent, but miss on the dielectric constant. Not all of the data is bad, but you need to know that you can’t just plug in the numbers from the tables.”
At their session, Hargin and DeGroot will discuss the need for engineers to make their own measurements. Their two companies are teaming up on a methodology that would enable engineers to do that in a simple and accessible way. “We’re not inventing a new method,” DeGroot said. “We’re making a modification to an existing IPC test method. And we’re making it in such a way that people can put a slip of material in a clamp, hit a button, and be assured they are getting the right answer.”
In essence, Hargin and DeGroot are aiming to clear up some of the mystery that surrounds materials testing. Today, they said, there are too many methods for characterizing the dielectric constant and loss tangent of a material. As a result, engineers are often hesitant to make the measurements themselves because they’re not sure which method to use. “It’s like the wild, wild West out there right now,” Hargin told us.
Their goal is to make engineers aware of the potential problems, as well as the solutions. “The thing that every engineer needs to be ‘woked’ to is that their numbers aren’t always matching,” DeGroot said. “You need better data. And how do you get that data? Do it yourself.”
Senior technical editor Chuck Murray has been writing about technology for 34 years. He joined Design News in 1987 and has covered electronics, automation, fluid power, and automotive technology.

Bill Hargin of Z-zero: “A lot of people think of impedance and stackups as kind of a monolithic concept, but they’re pretty nuanced, in fact.”

OEMs Must Own the Stackup
we’ve discussed in the past. He just had a patient leave his office. He got my file folder out, and I can see him thinking right in front of me, looking through my folder. I whip out my laptop and flip it open, and I’ve got graphs and Excel tables. I’ve mapped out my history of medications and the resulting symptoms in Excel, and I’m showing it to him. I have better data than he does.
they analyze everything. Well, engineers need to do that with stackups and materials. Don’t just kick the can down the road and think your fabricators are going to do everything perfectly. And I’ve had engineers say to me, “Why do I give the same set of requirements to three different fabricators, and I even specify the materials, and I get three different stackups back?” In response, I’ve told them that they need to take more control of the process and leave less margin for the individual fabricators to figure things out separately.
the stackup design process. And different fabricators will send stackups back to you in different formats, too. One will send you a PDF, the other will send you an Excel spreadsheet, and if you have a third fabricator, they’re going to send you a JPEG.
took ownership of the stackup design process.
One fabricator says it’s this, and another fabricator says it’s that, and the laminate vendor says it’s another thing. Which values can design teams trust? I’d like to hear Happy’s opinion on this. But the common mythology is that those differences are due to individual fabricator processes, and that’s not true. It’s fundamentally illogical.
motion. You can have two degrees of freedom; you can have three degrees of freedom. Here we have at least four, if not five, degrees of freedom. We use HyperLynx field solver, but others may use different field solvers. There’s one set of Maxwell’s equations, but field solver A and field solver B might use slightly different meshing techniques. Now, do the field solvers usually agree? Yes, they do. But let’s say that the field solver is one degree of freedom.
same, or are they slightly different? The third is the operator as a degree of freedom: Person A versus Person B at two different fabricators. They’re not sitting side by side, comparing their work. That doesn’t happen until the stackup gets back to the OEM. So, that’s a third degree of freedom. They could be using different Dks, and many times they are—a fourth degree of freedom.
they’ll just say, “Oh, yeah, it’s 100 ohms.” As if everything was exactly 100 ohms right on the button. And they won’t give you the Dk numbers. So you get all of this variation, and that’s why I say you need to fence those cattle in. I feel bad because I’m referring to people as cattle, but it’s just a metaphor.
back a three-dimensional measurement.
characterizing materials because the more expensive the material, the more stable it is and the less variation you’re going to get. If you’re sticking with FR-4, it’s a very good mechanical platform, but electrically, it’s all over the map.
might do five or more stackups in a day serving different customers. One of those stackups is the one I’m doing for you, the OEM. But you’re doing it the opposite way. You have one stackup that you’re doing, let’s say, with three different fabricators. In the end, who needs to own the divergent results? It’s the OEM that owns it. And, if they own it, they own the result.
and measure the raw laminate myself. Methodologies exist to do that. Our Z-field product does that. And you can remove the uncertainties. You could say, “Well, look, I’ve measured it myself from zero to 20 gigahertz. Here is the Dk and Df profile for this particular laminate construction.”
of hours laying out a circuit board and owning the CAD layout, then maybe dozens more hours using up to $100,000 worth of EDA tools, to design and simulate designs. Why don’t the same people that are putting that kind of investment into the board layout and SI and PI simulation process pay more attention to the spinal cord of their design—which is the stackup—and why do they delegate that to the fabricators? That’s a design-philosophy question.
have a number like 90% for plane layers. But your design has a different percent copper on the various signal layers. Some CAD tools know that. Valor NPI knows that, and those numbers could be used in the stackup design flow to refine prepreg dielectric thicknesses and therefore get more accurate impedance numbers.
difference. Oh, it’s only two, two and a half percent different.” But those percentages are eating into your tolerance band. So, there’s the manufacturing tolerance and then there’s the engineering tolerance.
uncertainty in their designs. If you have first-pass prototype success, and production is all done on schedule, you’ve reduced uncertainty down to an optimal level. I’m not really saying it’s a crapshoot; what I’m saying is, it’s our job as designers, engineers and design teams to reduce uncertainty wherever we can in our design flow. Just because I didn’t get bit by something on my last design doesn’t mean that I’m not going to get bit by the same issue on my next design.
it’s going to work tomorrow. That’s how my son thinks when he’s driving to and from college. He says, “I don’t get tickets.” Well, until he got a speeding ticket for going 20 over the speed limit. Anyway, I think we need to reduce uncertainty. That’s why I use the metaphor of getting the cattle back into your barn and keeping track of more things than you were keeping track of yesterday. Because when speeds increase, the margin of error decreases.
how they implement that standard. Some companies like Nan Ya, where both Happy and I used to work, have their own glass manufacturing. But other laminate vendors source their glass from multiple sources. That could be a source of variation.
that Xpedition and HyperLynx use. In our environment, there’s a lot more detail. For example, in the materials library, we have about 150 materials, and probably another 10 materials, I would say, by the end of this year. We have a lot more granularity, as it relates to stackup and materials, and we send that data to and from Xpedition or HyperLynx.
of Z-zero were planted in my mind back in my HyperLynx days. And I thought that a tool should exist that does what my Z-planner Enterprise product does. I thought that if speeds kept increasing, engineers would need to have a tool that handles the granular details of stackups. And that’s the journey that I’ve been on with Z-zero.
both pursuing the same goal: to take manufacturing knowledge and move it to the left in the design process with an EDA tool that would handle all the uncertainties in stackup design.