While much of the technology industry fights AI regulation, Atlassian’s General Counsel, Stan Shepard, is making a different calculation: companies that comply early will win the trust war that determines enterprise AI adoption.
In conversation at Team ’25 Europe in Barcelona, Shepard outlined a strategy that treats regulation as a market advantage rather than an innovation blocker. This puts Atlassian at odds with peers such as OpenAI and Meta, which argue that early or heavy-handed regulation could slow AI innovation.
Shepard explains, “The success of AI for us and our customers is really around being able to trust it. And fortunately or unfortunately, I think that trust needs a little bit of a carrot, in the sense of there has to be a law that we are aiming to meet, and without that law, sometimes, I don’t think industry necessarily will do the right thing.”
### Smart Regulation as Partnership, Not Constraint
Shepard distinguishes between “smart regulation” and “regulating technology for the sake of regulation,” with the key difference coming down to collaboration.
He elaborates, “It’s going to be a partnership between industry and lawmakers, and we have a very large role—Atlassian and some of our peers, I think—in helping regulators and lawmakers understand the technology that they’re trying to regulate.”
This approach helps policymakers focus on real risks—such as high-impact uses in hiring or performance reviews where life-changing decisions are made—versus lower-risk applications.
Atlassian demonstrates this through early adoption of the European Union (EU) AI Pact, a voluntary framework for complying with parts of the EU AI Act ahead of schedule. Shepard notes, “We thought it was just a really great opportunity for us to be leading the pack. It was very practical. It was very, you know, here’s the things you have to do to meet it. More importantly, it aligns with Atlassian’s principles of transparency and customer focus.”
### The Legal Team That Broke the Adoption Curve
Inside his own organization, Shepard points to concrete results. He cites a recent industry survey suggesting that legal teams rank among the slowest adopters of AI.
Atlassian’s legal team has achieved 80-90% daily active users for AI tools, challenging the narrative about AI readiness in traditionally conservative functions.
“I’m so proud the Atlassian legal team has flipped the script on that,” Shepard says.
Three factors drive this success:
- Quality products like Atlassian’s Rovo
- Cultural alignment with being “an innovative legal team” that works like the engineering and product teams they support
- The nature of legal work itself
He explains, “If you think about the legal profession, similar to journalism, it’s all about words. You know, constantly, words have meaning, and every word happens on a page. For us, generative AI is perfect.”
Applications range from contract drafting to document summarization to translation. Shepard adds, “Gone is the day I think of staring at a blank page and being like, I need a contract.”
The main challenge is training and change management. His philosophy is to “go slow to go fast”—invest time in learning now, so teams move faster within a few months.
### Defining Guardrails Beyond the Buzzword
The term “guardrails” is used often but defined rarely: many companies claim to have them, yet few explain what they are. Shepard is one of the few who does.
He breaks guardrails down into three categories:
#### Hard Guardrails
Non-negotiable legal boundaries. “Those are the ones where I come down firmly, which is, like, we will not cross that line,” Shepard states.
This includes both new AI-specific laws and existing regulations around privacy, security, and data protection: laws that have been on the books for years but now apply in new ways because of AI.
#### Industry-Specific Guardrails
These vary by sector and customer type. For regulated industries such as government, banking, and healthcare, there are additional protections around personal information that don’t apply universally but must be respected contextually.
#### Ethical Guardrails
Voluntary standards that go beyond legal requirements. Shepard cites deepfakes as an example where Atlassian might impose restrictions not because they’re legally mandated, but because “it’s the ethical right thing to do.”
This represents the difference between “AI that’s actually utopian and creates a world that we want to live in, and not dystopian.”
By breaking the idea down into legal, industry, and ethical layers, this framework moves the conversation from abstract principles to operational decisions that engineering teams can work with, compliance functions can audit against, and customers can evaluate.
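To see how that layering might translate into something a team could actually run, here is a minimal illustrative sketch. It is my own construction, not Atlassian’s implementation; every tier name, guardrail, and predicate in it is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Tier(Enum):
    HARD = "hard"          # non-negotiable legal boundaries
    INDUSTRY = "industry"  # sector-specific obligations (government, banking, healthcare)
    ETHICAL = "ethical"    # voluntary standards beyond what the law requires

@dataclass
class Guardrail:
    tier: Tier
    name: str
    check: Callable[[dict], bool]  # returns True when the feature passes

def violations(feature: dict, guardrails: list[Guardrail]) -> dict[Tier, list[str]]:
    """Group failed guardrails by tier so each layer can be handled differently:
    HARD failures block shipping, INDUSTRY failures block for affected sectors,
    ETHICAL failures escalate to review."""
    failed: dict[Tier, list[str]] = {t: [] for t in Tier}
    for g in guardrails:
        if not g.check(feature):
            failed[g.tier].append(g.name)
    return failed

# Hypothetical guardrails -- the names and predicates are invented for illustration.
GUARDRAILS = [
    Guardrail(Tier.HARD, "lawful_basis_for_personal_data",
              lambda f: f.get("has_lawful_basis", True)),
    Guardrail(Tier.INDUSTRY, "healthcare_data_controls",
              lambda f: "healthcare" not in f.get("sectors", []) or f.get("phi_controls", False)),
    Guardrail(Tier.ETHICAL, "no_realistic_impersonation",
              lambda f: not f.get("generates_realistic_likenesses", False)),
]

feature = {"sectors": ["healthcare"], "phi_controls": True,
           "generates_realistic_likenesses": True}
for tier, names in violations(feature, GUARDRAILS).items():
    print(tier.value, names)
# hard []
# industry []
# ethical ['no_realistic_impersonation']
```

The structure makes Shepard’s point operational: hard violations are non-negotiable, industry checks apply contextually, and ethical checks encode standards the law doesn’t yet require.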
### Integrating Responsible Tech into Development
Atlassian’s responsible tech review process is integrated directly into development workflows through a standardized template, showing how ethical frameworks survive contact with shipping deadlines.
Shepard acknowledges that version one “was not the perfect version,” and engineering feedback focused on efficiency concerns like redundant questions and excessive depth for lower-risk use cases. The response was to iterate.
He elaborates, “We have a great relationship with engineering. They definitely understand the why—why this is important, why responsible tech is critical to shipping products that customers will trust. So it really just comes down to the how.”
The revised approach uses threshold questions that calibrate review depth to risk level, streamlining the process for lower-stakes features while maintaining rigor for high-consequence applications.
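As a rough sketch of that pattern, the following shows how a few threshold questions could gate review depth. The questions, weights, and tier names are invented for illustration; Atlassian has not published its template in this level of detail.

```python
# Risk-calibrated review: a few yes/no threshold questions decide how deep
# the responsible-tech review goes. All questions and tiers are hypothetical.

THRESHOLD_QUESTIONS = {
    "affects_consequential_decisions": 3,  # e.g. hiring or performance reviews
    "processes_personal_data": 2,
    "generates_user_visible_content": 1,
}

REVIEW_TIERS = {
    0: "self-serve checklist",
    1: "lightweight review",
    2: "standard review",
    3: "full responsible-tech review",
}

def review_tier(answers: dict[str, bool]) -> str:
    """Map threshold answers to the deepest review tier any 'yes' triggers."""
    score = max((weight for question, weight in THRESHOLD_QUESTIONS.items()
                 if answers.get(question)), default=0)
    return REVIEW_TIERS[score]

# A low-stakes feature gets a streamlined pass...
print(review_tier({"generates_user_visible_content": True}))   # lightweight review

# ...while a high-consequence one keeps the full rigor.
print(review_tier({"affects_consequential_decisions": True}))  # full responsible-tech review
```

The design choice mirrors the engineering feedback Shepard describes: lower-stakes features skip the redundant depth, while high-consequence ones retain it.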
### Building Technical Capabilities to Address Enterprise Concerns
Beyond process and policy, Atlassian is building technical capabilities that address enterprise concerns directly.
The launch of Atlassian-hosted Large Language Models (LLMs) responds to customers who “don’t want their data to leave the perimeter of Atlassian control,” with particular emphasis on data residency requirements for European customers.
Shepard sees the European Union AI Act as the new “high-water mark” for global regulation, much as the General Data Protection Regulation (GDPR) set the privacy standard.
The strategy is simple: aim high, then adjust around the edges.
### My Take
Atlassian is treating regulation not as a constraint but as a product feature. Shepard’s legal team is effectively prototyping what “trust-led AI” looks like inside a fast-moving software company, turning compliance into a design discipline.
What stands out is how Atlassian translates broad ideas—trust, responsibility, ethics—into frameworks that engineers can actually build against. Shepard’s guardrails model shows that clarity isn’t just moral hygiene but also an operational advantage.
The results are unusually solid: near-universal AI adoption in a department usually allergic to risk; a three-tier guardrails model that translates ethics into engineering language; and review processes that evolve through developer feedback rather than stall because of it.
This represents a different kind of competitive logic for enterprise AI. Shepard believes that credibility will compound faster than novelty—that companies building for the law’s high-water mark will outpace those chasing the next shiny feature.
Regulation, in this view, isn’t the drag coefficient of innovation; it’s the stabilizer that lets it scale.
Across Atlassian’s Team ’25 stories, there’s a consistent theme: whether it’s developer experience, product design, or legal governance, the company treats trust as an engineering problem, something you build into the system rather than retrofit with slogans.
Source: https://diginomica.com/team-25-europe-atlassian-wants-ai-regulation-adoption-rate-shows-matters