The AI security gap: a CTO and CISO’s guide to making their first AI investment

Talia Moyal / Head of Outbound Product at Gitpod / May 20, 2025

AI coding tools are reshaping development and expanding the attack surface

AI coding assistants have rapidly evolved from novelty to necessity. 97%¹ of enterprise developers now use generative AI coding tools, gaining productivity but creating new security vulnerabilities. As AI tools become central to development workflows, they’re creating dual risks: developers unknowingly exposing sensitive data or merging insecure code, and attackers exploiting these tools to quietly infiltrate the software supply chain. AI has quietly turned development environments into the riskiest, least-governed part of the software supply chain.

The core issue is architectural: most AI tools run on local developer machines, places enterprises can’t see or control. Traditional environments like laptops or unmanaged VMs weren’t built for autonomous agents; they lack guardrails, visibility, and enforcement. AI has introduced a class of risks that moves faster than traditional security measures can respond: insecure package installation, credential exfiltration, and untraceable agent behavior.

For example, AI-generated code often contains security flaws. A recent study found that 36%² of code suggestions from GitHub Copilot contained vulnerabilities ranging from SQL injection to hard-coded secrets. These errors happen at machine speed, without human context.
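
To make the failure mode concrete, here is a hypothetical example of the injection-prone pattern such studies flag, alongside the parameterized fix (the snippet is illustrative, not drawn from the study’s dataset):

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typical AI-suggested pattern: string interpolation builds the query,
    # so input like "x' OR '1'='1" returns every row (SQL injection).
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver escapes the value, closing the hole.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(get_user_unsafe(conn, "x' OR '1'='1"))  # [('alice',)] -- injection succeeds
print(get_user_safe(conn, "x' OR '1'='1"))    # [] -- injection fails
```

At the same time, the AI agents themselves introduce new risks: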

  • Data leakage: AI coding assistants often send code to external models. Samsung banned employees from using ChatGPT after engineers leaked internal source code. Major financial institutions like JPMorgan and Goldman Sachs followed suit, citing data protection concerns.

  • Supply chain attacks: AI agents know nothing about the inherent safety of a library or package. Attackers exploit the trust developers place in AI-generated suggestions to slip backdoors into code.

  • Privilege misuse: Agents that run locally can access sensitive credentials or systems through the developer’s own permissions. Prompt injection attacks have already demonstrated how AI can be tricked into leaking secrets or running unauthorized code (a minimal sketch of this exposure follows this list).
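
As a minimal sketch of that exposure: any process a developer launches, an AI agent included, runs with the developer’s own permissions. The file paths and variable patterns below are illustrative, not a specific agent’s behavior:

```python
import os
from pathlib import Path

# Any locally launched process -- including an AI agent -- inherits the
# developer's permissions and can reach the same secrets.
def reachable_secrets() -> list[str]:
    candidates = [
        Path.home() / ".aws" / "credentials",
        Path.home() / ".ssh" / "id_rsa",
        Path.home() / ".netrc",
    ]
    found = [str(p) for p in candidates if p.exists()]
    # Environment variables are equally visible to child processes.
    found += [k for k in os.environ
              if any(s in k for s in ("TOKEN", "KEY", "SECRET"))]
    return found

if __name__ == "__main__":
    print(reachable_secrets())
```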

In short, AI automates the attack chain without adding safeguards, turning developer laptops into vulnerable endpoints.

What’s keeping your CISO up at night

These emerging risks aren’t theoretical. A growing list of real incidents shows how AI can amplify existing security gaps:

  • Insecure code suggestions: 36% of Copilot’s output contained security flaws, including hard-coded API keys.³

  • Malicious package insertion: Attackers can manipulate AI to recommend malicious libraries or create subtle backdoors⁴. This makes the AI itself an attack vector, because developers often implicitly trust the output of their agents (a dependency-vetting sketch follows this list).

  • Data leakage and privacy breaches: Samsung’s case shows how source code can leak, creating compliance risk for enterprises subject to GDPR, HIPAA, SOC 2, and similar regimes. Even internal AI services can log or cache sensitive data improperly on a local disk, where it may be indexed or backed up in insecure ways.

  • Misconfiguration and overreach: AI often suggests weak passwords, outdated encryption, or insecure configurations. These flaws can reach production if no human catches them, yet many AI-generated PRs are merged without review⁵.
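
One mitigation for the package-insertion risk above is to vet every AI-suggested dependency before it is installed. The sketch below uses a hypothetical allowlist (the approved set is invented for illustration) and flags near-miss names, a common typosquatting tactic:

```python
import difflib

# Hypothetical allowlist of vetted packages; a real one would come from
# central policy, not a hard-coded set.
APPROVED = {"requests", "numpy", "pandas", "flask", "sqlalchemy"}

def vet_package(name: str) -> str:
    if name in APPROVED:
        return "ok"
    # Flag names that are one edit away from an approved package.
    close = difflib.get_close_matches(name, APPROVED, n=1, cutoff=0.8)
    if close:
        return f"suspicious: '{name}' resembles approved '{close[0]}' (possible typosquat)"
    return f"blocked: '{name}' is not on the allowlist"

print(vet_package("requests"))  # ok
print(vet_package("reqeusts"))  # suspicious: possible typosquat
print(vet_package("evil-lib"))  # blocked
```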

The Cloud Security Alliance and other bodies now recommend sandboxing and runtime validation for AI-generated code. The message is clear: if you use AI tools for development, you must isolate and monitor their behavior in real time.
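
A minimal sketch of that recommendation, independent of any particular vendor’s sandbox: run AI-generated code in a separate process with a stripped environment and a hard timeout. A production sandbox would add filesystem and network isolation on top:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout_s: int = 5) -> subprocess.CompletedProcess:
    # -I puts Python in isolated mode (no user site-packages, no cwd on path);
    # env={} keeps the parent's secrets out of the child's environment.
    return subprocess.run(
        [sys.executable, "-I", "-c", code],
        env={},
        capture_output=True,
        text=True,
        timeout=timeout_s,  # raises subprocess.TimeoutExpired on runaway code
    )

result = run_sandboxed("print('hello from the sandbox')")
print(result.stdout)
```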

How enterprises in regulated industries can embrace AI securely

71% of organizations now use AI in software development, but nearly half do so without adequate controls. This is not because they don’t care about security; it’s because legacy environments weren’t designed to handle autonomous agents⁶.

AI enables faster coding and automated testing. But many teams allow AI-generated code into production without human review or policy enforcement. Developers use tools like Cursor or Copilot without visibility. Security teams can’t see what’s running, what dependencies are pulled in, or what secrets may be leaked.

Compliance concerns are slowing AI adoption. Financial institutions blocked ChatGPT not out of fear, but because they couldn’t guarantee it wouldn’t leak client data or introduce unauthorized code. European regulators have flagged similar risks. Some organizations respond by building internal LLM platforms, but even those still run on local laptops, where secrets and code often coexist without guardrails.

The result? A patchwork of restrictions that frustrate developers and don’t satisfy security. What enterprises need is infrastructure that closes the ‘AI security gap.’ This is where Gitpod becomes critical.

Gitpod: the secure substrate for AI-assisted software development

Gitpod’s core principles of standardization, ephemerality, isolation, and policy-controlled environments have evolved from developer experience features to security necessities:

Standardized, policy-enforced setup

Gitpod enables pre-configured environments with enterprise policies baked in. You can:

  • Define which AI tools are installed

  • Restrict network access to only approved endpoints

  • Block access to sensitive files by default

  • Centrally patch vulnerabilities in base images, instantly updating all workspaces

Unlike laptops, where developers manage their own setup, Gitpod ensures every workspace adheres to approved configurations.
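
As an illustration of that guarantee (Gitpod’s real configuration lives in its own files; the dictionary and policy sets below are invented stand-ins), a central policy check over a workspace definition might look like this:

```python
# Invented policy sets for illustration only.
APPROVED_TOOLS = {"copilot", "codeium"}
APPROVED_HOSTS = {"api.github.com", "registry.internal.example.com"}

def check_workspace(config: dict) -> list[str]:
    # Collect every deviation from central policy instead of failing fast,
    # so the developer sees all violations at once.
    violations = []
    for tool in config.get("ai_tools", []):
        if tool not in APPROVED_TOOLS:
            violations.append(f"AI tool not approved: {tool}")
    for host in config.get("egress_hosts", []):
        if host not in APPROVED_HOSTS:
            violations.append(f"network endpoint not approved: {host}")
    return violations

print(check_workspace({
    "ai_tools": ["copilot", "cursor"],
    "egress_hosts": ["api.github.com", "exfil.example.net"],
}))
```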

Fine-grained access and secrets management

Gitpod integrates with single sign-on (SSO) and role-based access control; access to source repositories mirrors each user’s exact permissions in the Git provider. More importantly, Gitpod integrates with secret managers and ephemeral credentials, fetching temporary, least-privilege credentials when a workspace starts. No more storing AWS keys in configuration files that AI scripts might accidentally leak. If an AI process attempts to access unauthorized resources, it won’t have the credentials unless they are explicitly provided. This principle of least privilege is hard to enforce on personal machines but natural in centrally orchestrated environments.
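
The general pattern is worth seeing end to end. The sketch below uses AWS STS as one concrete backend; it illustrates ephemeral, least-privilege credentials in general, not Gitpod’s internal implementation, and the role ARN is a placeholder:

```python
import boto3

def fetch_workspace_credentials(role_arn: str, session_name: str) -> dict:
    # Exchange the platform's identity for short-lived, scoped credentials
    # instead of baking long-lived keys into the workspace.
    sts = boto3.client("sts")
    resp = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=session_name,
        DurationSeconds=3600,  # credentials expire with the session
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken, Expiration.
    return resp["Credentials"]

creds = fetch_workspace_credentials(
    "arn:aws:iam::123456789012:role/dev-workspace-readonly",  # placeholder ARN
    "workspace-1234",
)
```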

Visibility and enforcement

Security starts with visibility. Gitpod tracks everything: environment starts, package installs, shell commands, and more. But visibility alone isn’t enough.

Gitpod also enables enforcement:

  • Prevention: block risky package installs, restrict outbound network traffic, or prevent access to internal APIs (a minimal egress check is sketched after this list).

  • Remediation: detect obfuscation, exfiltration attempts, or other unusual behavior, and shut down compromised environments in real time.
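
To make the prevention point concrete, here is a minimal deny-by-default egress check; the allowlist is invented, and real enforcement would sit at the network layer rather than in application code:

```python
from urllib.parse import urlparse

# Invented allowlist; in practice this comes from central policy.
ALLOWED_HOSTS = {"api.github.com", "pypi.org", "files.pythonhosted.org"}

def check_egress(url: str) -> bool:
    # Deny by default: anything not explicitly approved is blocked.
    host = urlparse(url).hostname or ""
    allowed = host in ALLOWED_HOSTS
    print(f"{'ALLOW' if allowed else 'BLOCK'} {host}")
    return allowed

check_egress("https://pypi.org/simple/requests/")    # ALLOW pypi.org
check_egress("https://exfil.attacker.example/drop")  # BLOCK exfil.attacker.example
```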

And for threats that only emerge when code is actually running, Gitpod supports runtime visibility, a critical layer for identifying dynamic threats like memory-based attacks or condition-triggered behaviors. Runtime security is an evolving area, but one that’s especially relevant as AI-generated or agent-executed code becomes more common.

Together, visibility, enforcement, and runtime awareness create a zero-trust foundation for AI development.

Isolation and sandboxing

Each Gitpod environment runs in a container isolated from the user’s machine and corporate network. This isolation is ideal for AI-assisted code: if an AI agent malfunctions, the damage stays contained. Nothing on the user’s laptop or in production systems is affected, and with proper network policies the environment cannot make unauthorized external calls. Gitpod runs development environments in isolated, ephemeral containers inside your cloud VPC (on enterprise plans), following a zero-trust model with deny-by-default networking. Even if an AI attempts something unauthorized, it can’t reach restricted resources.

Ephemeral environments

Environments are short-lived or reset on start, greatly reducing the risk of AI agents leaving behind malicious payloads or insecure configurations. Each environment starts from a clean state using approved base images and tools, and any change the AI makes stays confined to that session. If something seems wrong, you can simply destroy the environment without worrying about persistent malware. Source code stays in your VPC, never residing on developer devices unless you want it to. This mitigates data leakage: if an AI accesses sensitive information, it isn’t left unencrypted on local hard drives but remains in environments that can be audited and wiped clean.
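
The isolation-plus-ephemerality model the last two sections describe can be sketched with the docker-py SDK. This shows the general container pattern, not Gitpod’s own runtime:

```python
import docker

client = docker.from_env()
# Run untrusted work in a clean, throwaway container built from an
# approved base image.
output = client.containers.run(
    image="python:3.12-slim",
    command=["python", "-c", "print('agent ran here')"],
    network_mode="none",  # deny-by-default networking
    read_only=True,       # no persistent writes inside the container
    remove=True,          # container is destroyed as soon as it exits
)
print(output.decode())
```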

Gitpod closes the AI security gap by providing a controlled, transparent, and resilient platform for AI-assisted development. Instead of trying to secure each developer’s machine (an unwinnable challenge), you shift to a platform with built-in security. Gitpod provides the underlying environments to run SWE agents in isolation, securely, while enabling collaboration and maintaining developer speed.

The business case: productivity, security, and compliance ROI

Securing AI-assisted development isn’t just about risk reduction; it’s a major ROI driver. Standardizing, securing, and automating your development environments with Gitpod delivers direct business value and cost savings on top of breach avoidance. Fortune 500 companies have used Gitpod to drastically improve development velocity for years.

Developer productivity & onboarding

Gitpod eliminates setup friction, saving ~200 hours per developer per year. Teams report 75% faster onboarding, reducing time to first PR from 10 days to 2–3. Developers spend more time reviewing AI contributions and guiding architecture, and less time fixing environment drift.

For a 100-developer org, that’s roughly 20,000 hours saved per year, worth about $2M in output gains assuming a loaded cost of ~$100 per engineering hour.

Cost savings vs. legacy solutions like VDI

Gitpod replaces costly VDI systems and laptop-heavy security postures. One enterprise saved 60% by moving from VDI to Gitpod, while also improving developer satisfaction.

Gitpod extends device lifespans and reduces endpoint risk, enabling thinner, cheaper clients.

Improved security posture

The average breach now costs $4.45M (IBM data). Gitpod helps enterprises avoid this by:

  • Keeping source code off local disks

  • Containing AI agents inside controlled environments

  • Logging activity for forensic audits

  • Simplifying compliance for SOC2, ISO 27001, and GDPR

One Gitpod customer in fintech used it to automate security guardrails without slowing developers, while maintaining full auditability during ISO 27001 certification.

Developer experience and retention

Security doesn’t have to mean friction. Gitpod provides one-click workspaces, instant onboarding, and seamless AI integration. Developers stay productive, and teams report 10% lower attrition.

By treating AI as a productivity multiplier, not a security liability, Gitpod lets you move faster without losing control.

AI is only as powerful as the environment it runs in

AI is rewriting how software gets built. But without the right environment, that velocity becomes volatility. Gitpod gives organizations the only thing that scales with AI: a secure, consistent platform built to run agents, ship code, and maintain control.

You don’t need multiple tools stitched together to secure, standardize, and scale AI development. You need a single platform that developers love and security teams trust. Gitpod is that platform. Try us today for free.


Resources used in this research:

