Full-stack power integrity simulation
PDNLab™ is a full-stack power delivery network simulator built for the realities of modern AI accelerators. As chips like the H100 push hundreds of amperes through ever-thinner metal layers, transient voltage droop driven by rapid current slew (di/dt) has become the dominant threat to performance and reliability.
PDNLab models the entire power delivery path — from voltage regulator through board, package transmission lines, through-silicon vias, on-chip metal grids, and decoupling capacitance — as a true physical system. It captures the spatiotemporal dynamics of current transients at every layer, giving designers detailed insight into where and when voltage integrity degrades.
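The board-to-die path described above can be approximated, in miniature, as a lumped RLC ladder: each stage contributes series resistance and inductance, with decoupling capacitance at its node. The sketch below is a minimal forward-Euler transient of such a ladder under a load step. All component values are illustrative assumptions, and this one-dimensional ladder is only an analogy for PDNLab's full spatial model:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    R: float  # series resistance into this node (ohm)
    L: float  # series inductance into this node (henry)
    C: float  # shunt decoupling capacitance at this node (farad)

def simulate(stages, i_load, v_src=0.75, dt=1e-11, steps=20000):
    """Forward-Euler transient of a VRM -> board -> package -> die RLC ladder.
    i_load(t) is the die current draw in amperes; returns the minimum die voltage."""
    n = len(stages)
    v = [v_src] * n   # node voltages: board, package, die
    il = [0.0] * n    # current through each stage's series inductor
    vmin = v_src
    for k in range(steps):
        t = k * dt
        for j, st in enumerate(stages):
            v_up = v_src if j == 0 else v[j - 1]
            # L di/dt = upstream voltage - node voltage - resistive drop
            il[j] += dt * (v_up - v[j] - il[j] * st.R) / st.L
            i_out = i_load(t) if j == n - 1 else il[j + 1]
            # decap absorbs the imbalance between inflow and outflow
            v[j] += dt * (il[j] - i_out) / st.C
        vmin = min(vmin, v[-1])
    return vmin

def load_step(t):
    # 200 A load step starting at t = 20 ns, ramping over 5 ns
    return 200.0 * min(max((t - 20e-9) / 5e-9, 0.0), 1.0)

ladder = [Stage(0.10e-3, 1e-9, 1000e-6),   # board plane + bulk caps
          Stage(0.05e-3, 50e-12, 200e-6),  # package substrate + MLCCs
          Stage(0.10e-3, 5e-12, 100e-6)]   # on-chip grid + on-die decap
print(f"worst-case die voltage: {simulate(ladder, load_step):.3f} V")
```

Even this toy ladder reproduces the qualitative behavior: the die node sags hardest during the current ramp, then recovers as upstream inductor currents catch up.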
More than a static IR-drop checker, PDNLab is a power integrity environment spanning the system down to individual IP blocks: a design methodology that brings PI awareness to the front end of the design process and quantifies the impact of floorplan, metal resources, and decap placement on dynamic droop under realistic AI workloads.
PDNLab doesn't produce static voltage maps. It generates a continuous voltage surface evolving through time — revealing how L·di/dt transients nucleate, propagate across the die, and recover.
The voltage at every point on the die is computed as a continuous field — not sampled at discrete nodes. Spatial gradients, droop wavefronts, and recovery dynamics are all visible.
Play back the voltage surface through time to see how transient current events driven by L·di/dt propagate voltage disturbances across the PDN. Pinpoint the exact moment and location of worst-case droops.
Watch voltage droops nucleate at high-current blocks and radiate outward through the power grid. Understand how PDN impedance, decoupling placement, and grid topology shape propagation.
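As a rough intuition for the droops described above, the disturbance at a block can be decomposed into a resistive IR term and an inductive L·di/dt term. A back-of-envelope sketch with illustrative numbers (not H100 data):

```python
# Total droop at a block ~ resistive IR drop + inductive L*di/dt kick.
# All values below are assumed for illustration.
I_peak = 150.0     # A, transient current step for a compute cluster
R_pdn  = 0.25e-3   # ohm, effective loop resistance to the block
L_pdn  = 2e-12     # H, effective loop inductance seen by the block
t_ramp = 5e-9      # s, current ramp time (e.g. a kernel launch edge)

ir_drop = I_peak * R_pdn            # 150 A * 0.25 mohm = 37.5 mV
ldi_dt  = L_pdn * I_peak / t_ramp   # 2 pH * 30 GA/s    = 60.0 mV
print(f"IR drop:      {ir_drop * 1e3:.1f} mV")
print(f"L*di/dt kick: {ldi_dt * 1e3:.1f} mV")
```

The split matters: IR drop scales with sustained current and responds to metal widening, while the L·di/dt term scales with current slew and responds to decap placement and loop-inductance reduction.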
PDNLab integrates into the chip design flow at the floorplan stage — early enough to influence architecture decisions before metal is committed.
RTL compiled to gate-level netlist. Power budgets estimated for AI compute blocks.
Functional blocks placed on die. Tensor cores, HBM stacks, and regulators positioned.
Complete board-to-transistor simulation. Identify droop hotspots before tapeout.
Power grid metal routing finalized with PI-informed constraints from PDNLab.
Parasitics extracted from routed design. PDNLab insight means fewer surprises.
Confident power grid verification. Droop hotspots already mitigated.
Watch the continuous voltage surface evolve through time; pinpoint droop hotspots spatially and temporally.
Capture realistic L·di/dt transients from GPU kernel launches, memory bursts, and phase transitions.
Complete power delivery: on-chip grid, package substrate, board PDN, and VRM — one unified simulation.
RTL is compiled and mapped to a gate-level netlist. Power characterization starts here — estimating peak current draw for AI compute blocks and modeling switching activity across thousands of parallel execution units. The PDN budget begins to take shape.
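The switching-activity-based current estimate at this stage is often first-order: the classic α·C·V·f dynamic-power model. A minimal sketch, with all numbers assumed for illustration:

```python
# First-order dynamic-current estimate for a compute block (the classic
# alpha*C*V*f model; all numbers below are illustrative assumptions).
def dynamic_current(alpha, c_switched, v_dd, f_clk):
    """Average switching current in amperes.
    P_dyn = alpha * C * Vdd^2 * f  ->  I = P_dyn / Vdd = alpha * C * Vdd * f."""
    return alpha * c_switched * v_dd * f_clk

# e.g. a compute cluster: 40 nF switched capacitance, 0.75 V rail, 1.8 GHz
i_avg = dynamic_current(alpha=0.3, c_switched=40e-9, v_dd=0.75, f_clk=1.8e9)
print(f"average current: {i_avg:.1f} A")  # 0.3 * 40e-9 * 0.75 * 1.8e9 = 16.2 A
```

Summed across blocks, estimates like this seed the PDN budget; peak (rather than average) draw is then what drives the di/dt scenarios analyzed later.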
Functional blocks are placed on the die — high-power tensor cores adjacent to HBM stacks, voltage regulators near hotspots. This is the ideal moment to run PI analysis: relative positioning determines the severity of L·di/dt transients.
After PDNLab analysis, optimized capacitance placement and refined switching current estimates feed back into layout. Power grid metal routing is finalized with PI-informed constraints — ensuring the mesh can handle peak di/dt from AI workloads without catastrophic voltage droop.
Parasitics are extracted from the routed design. If PDNLab was used at floorplan, sign-off convergence is dramatically faster — fewer ECO cycles, predictable droop margins, and confident voltage budgets even for the most demanding AI accelerator designs.
Final power grid verification against voltage drop specs. Designs that leveraged PDNLab at floorplan rarely encounter surprises here — droop hotspots were identified and mitigated long before sign-off.
Define your chip's power delivery architecture, assign workloads, and see results, all in one environment.
Place chip grids, via arrays, bump patterns, package planes, and board elements. Set physical parameters (wire widths, sheet resistance, inductance) for each layer.
Create current source profiles that model real workload behavior: clock-edge switching, kernel ramp-ups, burst events. Assign profiles to chip regions and define scenarios.
Run the simulation in seconds. Visualize voltage surfaces across every layer, animated over time. Compare scenarios, iterate on design parameters, converge on your target.
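A workload current profile of the kind described in these steps can be sketched as a simple function of time: baseline draw, a kernel-launch ramp, clock-edge ripple, and a memory burst layered on top. The shape and every parameter below are illustrative assumptions, not PDNLab's profile format:

```python
import math

def current_profile(t, f_clk=1.5e9, i_base=20.0, i_kernel=180.0,
                    t_launch=50e-9, t_ramp=10e-9,
                    burst_t0=120e-9, burst_len=8e-9, i_burst=60.0):
    """Current draw (A) for one chip region at time t (s)."""
    # kernel launch: linear ramp from idle to full compute current
    ramp = min(max((t - t_launch) / t_ramp, 0.0), 1.0)
    i = i_base + i_kernel * ramp
    # clock-edge switching: ripple at the clock frequency, scaled by activity
    i += 0.05 * i * math.sin(2 * math.pi * f_clk * t)
    # memory burst: a short rectangular surge
    if burst_t0 <= t < burst_t0 + burst_len:
        i += i_burst
    return i

samples = [current_profile(k * 1e-10) for k in range(2000)]  # 200 ns @ 0.1 ns
print(f"peak current: {max(samples):.1f} A")
```

Profiles like this, assigned per region, are what turn a static floorplan into a time-domain droop scenario: the ramp sets the di/dt, and the burst timing determines whether events from different regions align into a worst case.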
NVIDIA H100 GPU — full package-level power delivery network with 3,421 elements, 31 simulation scenarios, and AI-assisted analysis. Drag to rotate.