Posts

Will AI Replace Chip Design Engineers? The Truth About Job Security & Innovation

We opened our DVCon US ’26 Birds of a Feather session with the most electrifying, and anxious, question in the industry today: Is AI going to take your engineering job? To answer this, we turned to Clifford Cummings, world-renowned HDL synthesis trainer, and Yatin Trivedi. Their consensus was a much-needed reality check: the tools are changing rapidly, but the need for fundamental engineering expertise is more critical than ever.

The Trust Gap and the Junior Engineer Dilemma

One of the biggest risks discussed in the panel is the assumption of correctness. AI tools can, and often do, get things "completely wrong." There is a growing concern that junior engineers, impressed by the speed of generative AI, often assume the output is correct and submit it without proper verification. This creates a dangerous Trust Gap. You cannot fix what you do not understand. Maintaining a strong foundational background in chip design is absolutely essential to identify, de...

One Data Set, Four Different Scores: The AI Consensus Problem

In our previous installment, we looked at the brutal $10-per-query economics of AI-EDA. But even if the cost of inference drops to near zero, a deeper technical question remains: can you actually trust the answer? Continuing our series from the DVCon US ’26 Birds of a Feather session, Yatin Trivedi, Head of the Semiconductor Center of Excellence (CoE) at Capgemini Engineering, shared the results of a high-stakes experiment that serves as a reality check for the industry’s "AI-everything" race. It highlights a massive maturity gap that every verification lead needs to understand before integrating LLMs into their sign-off flow.

The Experiment: Grading the Plan

Yatin’s team conducted a controlled trial: they took a final design specification and a manually crafted, human-verified verification plan. They then asked four leading AI platforms to "grade" that plan for completeness. Could the AI identify gaps in the testing strategy before tape-out? I...
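The "one data set, four different scores" problem can be put in concrete terms: before trusting any single LLM grader, measure how far the graders disagree. The sketch below is purely illustrative, not the actual experiment; the model names, scores, and spread threshold are all hypothetical stand-ins for whatever rubric a verification lead would use.

```python
from statistics import mean, pstdev

def consensus_report(scores: dict[str, float], max_spread: float = 10.0) -> dict:
    """Summarize agreement across graders; flag when the score spread
    is too wide to treat any single grade as trustworthy."""
    values = list(scores.values())
    spread = max(values) - min(values)
    return {
        "mean": mean(values),
        "stdev": pstdev(values),
        "spread": spread,
        "consensus": spread <= max_spread,
    }

# Hypothetical completeness scores (0-100) from four unnamed LLM graders.
# These are NOT the numbers from Yatin's trial, just an illustration of
# the failure mode: same plan, wildly different grades.
scores = {"model_a": 92.0, "model_b": 78.0, "model_c": 85.0, "model_d": 61.0}
report = consensus_report(scores)
print(report)  # spread of 31 points -> consensus is False
```

If four graders cannot agree within a tolerance, the useful signal is the disagreement itself: it tells you the grade cannot yet replace human review.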

The $10 Query: The Compounding Cost of AI Hallucinations

This first installment of our DVCon US ’26 video series dives straight into a reality check that often gets lost in the hype of generative AI: the cost of a single "thought." In this segment, Srini (AsFigo) sits down with Asif ETV (HPCINFRA) to discuss why the transition to AI-native chip design isn't just a software challenge; it’s a massive infrastructure and economic hurdle.

Watch the Full Segment: In this 3-minute clip, watch Asif and Srini break down the economic reality of the modern AI-EDA stack and why optimizing for the right infrastructure is the only way to stay within budget while bringing AI into production.

The Cost Explosion: Doing the Math on Hallucinations

When we talk about "Agentic AI" in EDA, we imagine autonomous loops of design and verification. But as Asif ETV points out, the meter is always running, and it’s running faster than many teams realize. Sharing a startling metric from recent infrastructure experiments, the m...
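The compounding effect of hallucinations follows directly from the geometric distribution: if each query costs c and produces a usable answer with probability p, the expected spend per usable answer is c/p. A minimal sketch, where the $10 query comes from the post but the 25% acceptance rate is an assumed number for illustration:

```python
def expected_cost_per_good_result(cost_per_query: float, p_correct: float) -> float:
    """Expected spend to obtain one verified-correct answer when each
    attempt costs `cost_per_query` and succeeds with probability
    `p_correct` (geometric distribution: E[attempts] = 1 / p)."""
    if not 0 < p_correct <= 1:
        raise ValueError("p_correct must be in (0, 1]")
    return cost_per_query / p_correct

# Illustrative numbers only: a $10 query that is right 25% of the time
# costs $40 in expectation per usable result, before verification cost.
print(expected_cost_per_good_result(10.0, 0.25))  # 40.0
```

The point of the model: hallucination rate is a cost multiplier, so halving the error rate is worth as much as halving the per-query price.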

Solving the $100k AI-EDA Bottleneck: Insights from DVCon US ‘26

If you missed the Birds of a Feather (BoF) session at DVCon US '26 in Santa Clara, hosted by AsFigo, you missed a critical conversation about the structural barrier facing silicon design: AI is currently too expensive to fail. While Large Language Models (LLMs) have become proficient at generating Verilog syntax, the industry is hitting a wall we call the $100k AI-EDA Bottleneck. To move from "cool demos" to production-ready chip design collateral (RTL, SVA, UVM, SDC, synthesis and physical design scripts, all the way up to GDS-II), we must solve the hallucination problem without bankrupting the project on EDA licenses.

The Problem: The "EDA License Tax" on Reasoning

Agentic AI, meaning systems that autonomously iterate on design and verification, thrives on a "Trial-Error-Correct" loop. An agent might require thousands of simulation or linting runs to prune logic hallucinations and refine a complex IP block. In a traditional workflow, every on...
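The "Trial-Error-Correct" loop can be sketched in a few lines. This is a toy skeleton, not AsFigo's actual agent: `generate` and `check` are hypothetical placeholders for an LLM call and an EDA-tool run (lint or simulation), and the stand-ins at the bottom exist only so the loop is runnable.

```python
def agentic_fix_loop(spec, generate, check, max_iters=1000):
    """Minimal Trial-Error-Correct skeleton: generate a candidate,
    run it through a checker, and feed failures back as context
    until a candidate passes or the iteration budget is exhausted."""
    feedback = None
    for i in range(1, max_iters + 1):
        candidate = generate(spec, feedback)   # e.g. an LLM call (placeholder)
        ok, feedback = check(candidate)        # e.g. lint/simulation (placeholder)
        if ok:
            return candidate, i  # passing candidate plus iteration count
    raise RuntimeError(f"no passing candidate after {max_iters} iterations")

# Toy stand-ins: the "design" is a number and the checker wants it >= 3,
# so the loop needs three trips through the tool to converge.
gen = lambda spec, fb: (fb or 0) + 1
chk = lambda c: (c >= 3, c)
result, iters = agentic_fix_loop("toy-spec", gen, chk)
print(result, iters)  # 3 3
```

The iteration count returned by the loop is exactly the quantity the post is worried about: every increment is another tool invocation, and in a licensed flow, another charge against the seat.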

UVM on Verilator: Mapping the Minefield (The Technical Deep Dive)

While our previous posts focused on the strategic "Why," this post addresses the "How." Porting 50+ production-grade UVCs wasn't a matter of simple recompilation. It was an exercise in uncovering the delta between commercial simulator "permissiveness" and the strict reality of the SystemVerilog LRM and the Verilator execution model. Here are some of the primary technical roadblocks we mapped and cleared. You can learn a lot more at our upcoming event on Thursday; get your tickets today: https://buytickets.at/asfigo/2091884

1. Strict LRM & Parsing Enforcement

Commercial tools have historically been "lazy" with LRM enforcement. Verilator is not. Code that has run for years in proprietary silos often fails immediately here due to:

SVA: Basic temporal properties work; complex ones do not. We have ported AHB, APB and AXI4-Lite SVA IPs to Verilator, so although this sounds very limiting, in reality it is not that bad!
Range Syntax: Not so...

Cracking the UVM-Verilator Code: 50+ IPs, AI Guardrails, and the Open-Source North Star

UVM on Verilator Isn’t Impossible. It’s Just Hard. We Did It Anyway.

The “standard” line in the industry: UVM and Verilator don’t mix. Enterprise-grade open-source verification is a fantasy. So we put that claim under load.

✔ 50+ production UVCs ported
✔ 35+ non-obvious failure modes uncovered
✔ Converted into UVMLint and SVALint rules
✔ AI guardrails to prevent regressions

This post is the why and what; the how and numbers are for the room at Hacker Dojo. This is production-grade code, the kind that tapes out chips, running on open-source infrastructure. The implication is bigger than a port:

Reduced dependency on closed ecosystems.
Negotiating leverage with EDA vendors.
Strategic control over your verification stack.

If you’re a semiconductor company exploring Verilator for serious IP, don’t start from scratch. Start from the team that already mapped the minefield.

📅 Join us at the Hacker Dojo Next Thursday, the AsFigo team...

Solving the $100k License Tax: A Convergence of Industry Perspectives on Agentic AI for chip design

In our previous post, we discussed the gap between AI "demos" and production-grade silicon. Today, we address the primary friction point preventing the industry from scaling AI-driven verification: The License Tax.

Registration: https://forms.gle/WX6Hqwew2HfwDFmF8

The Math of Iteration

"Agentic AI", where autonomous loops generate, test, and fix RTL, relies on high-frequency iteration. For an AI agent to "learn" a fix or optimize a module, it may need to run hundreds or thousands of cycles through a verification engine. In a traditional environment, this is an economic impossibility. If every iterative trial pings a commercial tool seat, often costing upwards of $100,000 per license, the cost of the "agentic workforce" scales faster than its productivity. To make AI ROI viable, we must decouple the high-volume iterative cleaning from the high-cost final sign-off.

The Production Milestone: Verilator + UVM is becoming a ...
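The seat-scaling math can be made concrete with a back-of-the-envelope model. The $100k seat price comes from the post; the agent counts and seat ratios below are assumptions chosen purely to illustrate the decoupling argument, not real deployment figures.

```python
def agent_fleet_license_cost(n_agents: int, seats_per_agent: int, seat_cost: float) -> float:
    """Annual license bill when every iterative trial needs a commercial
    seat: cost scales linearly with the size of the agent fleet."""
    return n_agents * seats_per_agent * seat_cost

def decoupled_cost(n_signoff_seats: int, seat_cost: float) -> float:
    """Decoupled flow: high-volume iteration runs on open-source tools
    (e.g. Verilator and lint) at ~zero license cost, and only final
    sign-off consumes commercial seats."""
    return n_signoff_seats * seat_cost

# Hypothetical scenario: 20 agents each monopolizing one $100k seat,
# versus the same fleet iterating on open-source tools and sharing
# two commercial seats for sign-off only.
all_commercial = agent_fleet_license_cost(n_agents=20, seats_per_agent=1, seat_cost=100_000)
hybrid = decoupled_cost(n_signoff_seats=2, seat_cost=100_000)
print(all_commercial, hybrid)  # 2000000 200000
```

Under these assumed numbers the all-commercial fleet costs 10x the decoupled flow, which is the structural argument for moving the iteration loop onto open-source infrastructure.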