By Chris J. Preimesberger
Copyright The New Stack
Artificial intelligence is no longer a mere curiosity inside enterprise developer stacks — it’s becoming the nervous system of how software gets built, secured and released. JFrog, long known as a pioneer in artifact and release management, is now betting that enterprises will need not only more powerful AI tools, but also frameworks for trust, governance and control.
At its swampUP 2025 conference this week, JFrog rolled out a sweeping set of new features and products designed to help enterprises tame the complexity of modern AI-infused software supply chains. In a wide-ranging conversation with The New Stack, JFrog CTO and cofounder Yoav Landman outlined the company’s four big initiatives:
Trust and Governance (“DevGovOps”): Evidence-based controls for software releases.
AI Catalog: A unified, governed marketplace of AI and machine learning (ML) models.
Agentic Code Remediation: Automated vulnerability fixing through large language model (LLM)-driven assistants.
JFrog Fly: An “agentic repository” redefining versioning and release management for the AI era.
Taken together, the moves underscore JFrog’s strategy to remain the central nervous system of enterprise software pipelines, even as AI shifts the very definition of what code and releases look like.
Reinventing Software Governance With Evidence
Landman described it as perhaps the most strategic announcement: JFrog’s first step into release governance, which the company dubs DevGovOps. The idea is to move beyond managing binaries and builds into managing the very evidence of how those builds were created, tested and secured.
“We store attestations as signed evidence alongside the release artifact,” Landman said. “Artifactory is no longer just the single source of truth for binaries, but also the system of record for all the metadata that testifies to how those artifacts were tested and checked.”
This means enterprises can enforce evidence-based policies at every stage of their software supply chain. For example:
If an artifact hasn’t passed a security scan, it won’t be promoted to staging.
If an open source scan isn’t complete, the release won’t advance to production.
Custom compliance evidence from internal APIs can also be integrated.
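The evidence-gated flow above can be sketched in a few lines. This is a hypothetical illustration of the idea, not JFrog's API: the evidence names, stage names and policy shape are all assumptions made for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an evidence-gated promotion check. The evidence
# names and gate rules are illustrative, not JFrog's actual policy model.

REQUIRED_EVIDENCE = {
    "staging": {"security-scan"},
    "production": {"security-scan", "oss-license-scan"},
}

@dataclass
class Artifact:
    name: str
    evidence: set = field(default_factory=set)  # signed attestations attached so far

def can_promote(artifact: Artifact, target_stage: str) -> bool:
    """An artifact advances only if every required attestation is present."""
    missing = REQUIRED_EVIDENCE[target_stage] - artifact.evidence
    return not missing

build = Artifact("payments-service", {"security-scan"})
print(can_promote(build, "staging"))     # True: security scan evidence present
print(can_promote(build, "production"))  # False: OSS scan evidence still missing
```

The point of the pattern is that promotion is decided by attached evidence, not by who (or what) produced the artifact — which is why the same gates apply to human- and AI-written code.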
JFrog is launching this with an ecosystem of 12 partners, including GitHub, Atlassian and SonarSource. External governance vendors can push their attestations into Artifactory, creating a single policy framework.
“This is a framework that’s open and customizable,” Landman emphasized. “It’s governance that can automatically decide whether a release can advance — or flag issues for developers before release.”
The concept is especially critical as AI increasingly generates code. Whether a human or an LLM wrote it, governance policies remain the same. “At the end of the day,” Landman said, “you need the same level of trust and control.”
The AI Catalog: Controlling the Model Explosion
If governance is about trust, the second initiative is about curation. Enterprises are drowning in models — open source, proprietary, Software as a Service (SaaS)-delivered and custom fine-tuned. Adoption is often slowed by mistrust, lack of visibility and security concerns.
To address this, JFrog is extending its ML platform with the AI Catalog, a unified inventory of models — both packaged downloads and service-based APIs from providers like OpenAI.
“The problem we see is slower-than-expected adoption of AI models because of trust issues,” Landman said. “So, we’re bringing a unified catalog to control them all — open source, proprietary, API-based — in one place.”
The catalog lets enterprises:
Curate models based on licenses, maturity and organizational policies.
Scan models for malicious behavior or vulnerabilities.
Apply project-specific permissions (for example, allowing one team to use DeepSeek while another is restricted to OpenAI).
Proxy service-based models through JFrog’s runtime gateway, enforcing policies even on SaaS models.
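The project-specific permission idea can be illustrated with a minimal allowlist check. The team names, model identifiers and policy structure here are assumptions for the sketch, not JFrog's data model.

```python
# Illustrative sketch of per-project model permissions as described in the
# article. Team names, model names and the policy shape are assumptions.

MODEL_POLICY = {
    "research-team": {"allow": {"deepseek-r1", "llama-3"}},
    "payments-team": {"allow": {"openai/gpt-4o"}},  # restricted to OpenAI
}

def model_allowed(team: str, model: str) -> bool:
    """Gateway-style check: only allowlisted models pass for each team."""
    policy = MODEL_POLICY.get(team)
    return policy is not None and model in policy["allow"]

print(model_allowed("research-team", "deepseek-r1"))  # True
print(model_allowed("payments-team", "deepseek-r1"))  # False
```

Because the check sits at a gateway, the same policy can be enforced on hosted model downloads and on outbound calls to SaaS model APIs.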
Unlike competing tools, JFrog’s catalog includes both hosted models and external ones, giving enterprises a central control point. “It’s not just about finding models,” Landman said. “It’s about controlling their usage and trust level across the organization.”
Agentic Code Remediation: Auto-Healing for Security
Perhaps the most futuristic announcement is agentic remediation — what Landman called “auto-healing your code.”
JFrog is starting with GitHub Copilot as its IDE partner. As developers code, JFrog’s advanced security scans detect vulnerabilities or risky patterns. Instead of merely flagging issues, JFrog feeds high-quality remediation instructions, drawn from its research data, directly into the AI agent.
The result: The agent fixes the issue automatically. Developers see diffs and can accept or reject them, but most of the time, the fixes are spot-on.
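The scan-then-remediate loop can be reduced to a toy example: a scanner flags a risky pattern, and a remediation instruction is turned into a proposed fix the developer can accept or reject. The single rule below (an unsafe `yaml.load` call) is purely illustrative, not JFrog's detection logic.

```python
import re

# Toy illustration of the remediation workflow: detect a risky pattern,
# apply a research-backed fix, and surface the change for developer review.
# The rule and its fix are assumptions chosen for the example.

RULES = [
    # (detection pattern, (risky call, safer replacement))
    (re.compile(r"yaml\.load\((?!.*Loader)"), ("yaml.load(", "yaml.safe_load(")),
]

def propose_fix(source: str):
    """Return (fixed_source, changed); the developer reviews the diff."""
    for pattern, (old, new) in RULES:
        if pattern.search(source):
            return source.replace(old, new), True
    return source, False

code = "config = yaml.load(open('app.yml'))"
fixed, changed = propose_fix(code)
print(changed)  # True
print(fixed)    # config = yaml.safe_load(open('app.yml'))
```

In the real product the "rules" are LLM prompts built from JFrog's security research rather than regexes, but the review loop — propose, diff, accept or reject — is the same.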
“This is unique — nobody does this,” Landman said. “We have expensive research data that previously required humans to apply fixes. Now, we feed that to the LLM so it can act as a remediation agent in real time.”
The system goes beyond known common vulnerabilities and exposures (CVEs). It detects and fixes newly created vulnerabilities in the developer’s own code, effectively giving every developer an embedded AppSec expert.
“It’s like having a security expert sitting on your shoulder,” Landman said, “but one who not only points out problems, but writes the fix for you.”
JFrog Fly: Toward a World Without Versions
Finally, JFrog is unveiling JFrog Fly, which Landman calls the first agentic repository. This is both a new product and a new philosophy: the end of traditional versioning.
“When you release several times a day, the act of creating a version becomes a bottleneck,” Landman said. “We coined the phrase: ‘Imagine there’s no version.’”
Instead of rigid semantic versioning, Fly allows developers — or their AI agents — to interact with releases semantically. Developers can ask in plain English for “the latest secure release,” or “the build from last Tuesday with X feature,” without tracking version numbers.
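Resolving a request like "the latest secure release" amounts to querying release metadata instead of a version number. The record fields below are illustrative assumptions, not Fly's actual data model.

```python
from datetime import date

# Hypothetical sketch of semantic release resolution: pick a release by
# properties (build date, vulnerability count) rather than a version string.
# The metadata fields are assumptions made for this example.

RELEASES = [
    {"id": "rel-101", "built": date(2025, 9, 2), "vulns": 0},
    {"id": "rel-102", "built": date(2025, 9, 3), "vulns": 2},
    {"id": "rel-103", "built": date(2025, 9, 4), "vulns": 0},
]

def latest_secure_release(releases):
    """'Latest secure' = newest build with zero known vulnerabilities."""
    secure = [r for r in releases if r["vulns"] == 0]
    return max(secure, key=lambda r: r["built"])

print(latest_secure_release(RELEASES)["id"])  # rel-103
```

A natural-language front end (or an AI agent) would translate the developer's request into a predicate like this, which is what makes explicit version numbers optional.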
Fly will debut as a beta waitlist offering, initially targeting AI-native teams willing to experiment with new release management paradigms. It will later expand across platforms.
The product reflects a deeper shift. “The role of the release manager is fading,” Landman said. “We’re moving toward liquid software that’s continuously flowing forward, where versioning as we know it is becoming obsolete.”
Navigating the AI Hype Cycle
JFrog’s announcements arrive as enterprises wrestle with the reality that most AI projects fail. Studies suggest 80-90% of initiatives never reach production. Landman acknowledged the dilemma.
“There are two sides,” he said. “One is the approval and trust of AI models themselves. The other is finding real use cases that bring authentic value.”
Many projects, he argued, were launched for hype’s sake. “We didn’t want to just slap ‘agentic’ on our platform. It took time for tools, models and use cases to mature. Now we’re entering a wave of true agentic applications.”
Landman described the balance as a “sweet spot between full autonomy and trust.” AI should be given autonomy similar to a human teammate, but only within evidence-backed guardrails.
“This is what everyone is trying to achieve — delegating to AI, but with frameworks that preserve trust,” he said.
Partner Ecosystem, Real-World Integrations
The power of these announcements lies not just in the tools, but in the integrations. At its launch event, JFrog will showcase partner demos: GitHub will demonstrate how build provenance is stored alongside artifacts, and ServiceNow will show how change management approvals sync between IT service management (ITSM) systems and JFrog governance.
These integrations reflect JFrog’s philosophy: Enterprises already use multiple tools for compliance, governance and DevOps. JFrog positions itself as the evidence system of record, tying everything together.
“Software is moving so fast that versioning itself doesn’t make sense anymore. We need new tools and new ways of thinking,” Landman said.
For enterprises, the stakes are high. AI adoption will stall without trust, governance and automation. JFrog is betting that its platform can be the foundation for that trust — making software pipelines both faster and safer.
Analyst’s Perspective
The New Stack asked IDC analyst Jim Mercer for his take on this news.
Q: These are pretty advanced AI dev tools. How do these rank among available toolsets, as you see it, and why?
A: “Although they acquired the Qwak MLOps platform in 2024, JFrog’s approach to AI development seems to be to stay close to their core strength around managing applications and all the ‘stuff’ that goes into them. Its core strength lies in Artifactory, and the company has extended these capabilities to cater to the AI/ML life cycle.
“JFrog already had some AI/ML tools to help with managing models, akin to how they manage other software artifacts. The new AI Catalog can potentially help organizations build more secure AI applications by managing the AI life cycle and providing a single source of truth for all AI models. Although there are competitive AI solutions like Vertex and SageMaker, JFrog is less of a direct competitor and more of a complementary tool. Amongst more traditional competitors such as Nexus or GitLab within the repository space, the JFrog AI catalog is uniquely designed for managing models, and it provides governance and offers several interesting capabilities, thanks to the partnership with NVIDIA NIM.”