
Enabling Earth Blox to Scale AI Adoption
Client context
Earth Blox is a climate tech and geospatial analytics platform serving financial institutions, enterprises, and sustainability teams. As an SME with a lean structure, the team wanted to use AI effectively to accelerate and scale the business.
The challenge
Earth Blox’s leadership recognised that AI would be strategically important to their next phase of growth. AI experimentation was already happening across the business, with strong curiosity and initiative from developers, product managers, and commercial teams. As usage grew, different teams were drawing insights from similar tools, but without shared standards for prompts, workflows, or review.
This meant:
• Ideas and designs took longer than necessary to move into production.
• Product, design, and engineering spent time aligning outputs.
• Commercial and customer preparation remained more manual than it needed to be.
AI was recognised as high-potential, but without a common framework, leadership wanted greater clarity on how to scale usage safely and consistently across the organisation.
Our approach
Vaul Labs ran a two-day, in-person workshop with the Earth Blox team.
Day one centred on best practices, advanced prompting, human-in-the-loop control, and helping both technical and non-technical teams understand AI’s limits and failure modes.
Day two applied these principles across real Earth Blox product, design, and engineering workflows.
Rather than prescribing rigid processes, we worked with live code and existing systems to design repeatable, AI-assisted ways of working. The focus was on practical guidance around when to trust AI (and when not to) with explicit emphasis on sustainability, consistency, and adoption across the whole team, rather than isolated productivity gains.
Scope & constraints
Earth Blox required AI adoption to be ISO 27001 compliant, operate within a Google-first ecosystem, and function with explicit human oversight. No black-box automation was introduced: only AI that teams could understand, review, and trust.
Outcomes
“Our challenge was figuring out how to use AI effectively without slowing teams down or creating inconsistency. The workshop helped us align around practical use cases, establish better ways of working, and significantly reduce friction between functions. We left with working examples, clearer processes, and a faster path from ideas to execution.”
Genevieve Patenaude, CEO, Earth Blox
From isolated experimentation to structured AI workflows
• Developers progressed from using Copilot as autocomplete to structured, agentic workflows with clearer intent and review.
• Product and design reduced rework by aligning earlier on prompts, constraints, and expected outputs.
• Non-technical teams identified specific automation opportunities directly tied to commercial and customer outcomes.
Clearer decision-making and shared ownership
• Leadership gained clarity on where AI adds leverage, and where it does not.
• Tooling and workflow decisions accelerated, with explicit ownership of next experiments and measurable use cases.
• Fear of “doing AI wrong” diminished, replaced with practical guardrails and confidence.
A scalable foundation
• Earth Blox established a shared AI playbook documenting best practices, anti-patterns, and governance principles.
• This created a repeatable onboarding path for new hires and a platform for compounding AI capability over time.
Earth Blox’s experience reinforces something we see often: AI scales when it becomes part of how teams operate day to day. Shared standards, clear ownership, and practical judgement are what allow capability to build over time.
If you’re exploring how AI adoption can scale safely in your organisation, we’re always happy to compare notes.
——————————————
More case studies coming soon.