Beware, AI Coding Can Be a Security Nightmare

3 weeks ago 13

AI coding is picking up the pace, but so are the risks associated with it.

ai coding security risks

Illustration by Nalini Nirad

Developers have been embracing vibe coding and AI-assisted programming, with some fully trusting AI coding tools to handle everything, while others rely on them only partially. Surprisingly, one-quarter of YC founders admit that 95% of their codebase is AI-generated.

However, there are major downsides to coding with AI. While vibe debugging is one part of the problem, it is not limited to it—the AI-generated codes also introduce security issues.

AI Coding May Be Cool, But You Need Security Understanding

Recently, an X user deployed Cursor to build a SaaS app and emphasised that AI was not just an assistant but also a builder. A few days later, he shared that someone was trying to find security vulnerabilities in his app. The next day, he took to X and said he was under attack. 

Furthermore, he acknowledged that resolving the problem was taking a considerable amount of time, as he lacked the necessary technical knowledge.

Several app developers stepped in to help, suggesting what could have gone wrong and providing potential solutions to fix the issue. The hackers targeting the app took interesting approaches to send a message to the app builder. For example, the creator shared a screenshot that displayed a domain that said “please_dont_vibe_code.ai.” 

Certainly, the hackers had their opinions when trying to exploit the app’s vulnerabilities. Santiago Valdarrama, a computer scientist, took to X and stated, “Vibe-coding is awesome, but the code these models generate is full of security holes and can be easily hacked.” 

AI Code Adds Security Risks

Amlan Panigrahi, a GenAI engineer at Deloitte, told AIM, “It can be a security concern for organisations working on production environments. However, for a prototype with generic/open source data sets exposure, it doesn’t pose a problem.”

He further advised that developers should consider the security implications of their organisation’s nature of business if they intend to utilise coding Copilot assistants.  As an alternative, they could customise and provide API endpoints to LLMs hosted on trusted or self-hosted infrastructure to power these coding assistants.

Chetan Gulati, a senior DevSecOps engineer at Fraud.net, spoke to AIM regarding this. “AI coding does present significant security challenges. Generative AI, at its core, is an advanced sentence completion system, making it susceptible to prompt injection attacks that could introduce sensitive details or vulnerable code into a system,” he said.

He further said that AI models often rely on outdated third-party libraries, as they are trained on historical data rather than continuously adapting to the latest security patches and best practices. “This can lead to the inadvertent use of deprecated or insecure code, further amplifying risks.”

Raising concerns about the whole ‘vibe coding’ trend, Gulati noted that the dependence on AI-generated code without understanding its functionality can lead to security vulnerabilities, misconfigurations, or compliance issues, as developers may not be able to properly assess or secure the generated code before implementation.

The same was corroborated by a report by application security platform Apiiro. The report claimed that AI code assistants were indeed becoming popular, and code output had increased over the past two years. However, the growth came with risks like APIs exposing sensitive data.

It also stated that repositories containing personally identifiable information (PII) and payment data have increased 3x since Q2 2023. Moreover, there has been a 10x surge in APIs with missing authorisation and input validation over the past year.

A recent research report compared human and LLM-generated codes and stated, “It is vital to concentrate on creating methods for vulnerability evaluation and mitigation because LLMs have the power to spread unsafe coding practices if trained on data with coding vulnerabilities.” It further stated that LLMs may unintentionally introduce security flaws.

The research concluded that there are security vulnerabilities in both human and LLM-generated code, though the flaws in AI-generated code were found to be more severe.

Another research report by the Centre for Security and Emerging Technologies (CSET) found that AI-generated code across five LLM models contained bugs that are often impactful and could potentially lead to malicious exploitation.

A user on X mentioned that his friend’s app got hacked while building with Cursor and Bolt.

AI Coding Assistants Need Work Too

Several developers and security researchers have highlighted that certain features of AI code assistants, such as Cursor, could present a security risk. One developer mentioned on Cursor’s forum that internal company secrets might have been leaked to external servers, including those of Cursor and Claude, while using the assistant. 

Features like autocompletion and agent interactions access and utilise the contents of .env files, even when these are explicitly excluded in .gitignore and .cursorignore. Some users could reproduce the issue in the forum and confirm the claim. 

A user on X mentioned that if not careful, Cursor AI can delete folders anywhere, change OS settings, steal crypto wallets, and overwrite important configuration files.

Therefore, before diving into the use of AI for code generation, it appears necessary to have an understanding of security, whether you are coding with a relaxed approach or simply employing a code assistant for assistance.

Picture of Ankush Das

Ankush Das

I am a tech aficionado and a computer science graduate with a keen interest in AI, Open Source, and Cybersecurity.

Association of Data Scientists

GenAI Corporate Training Programs

India's Biggest Conference on AI Startups

April 25, 2025 | 📍 Hotel Radisson Blu, Bengaluru

Download the easiest way to
stay informed

Subscribe to The Belamy: Our Weekly Newsletter

Biggest AI stories, delivered to your inbox every week.

AI Startups Conference.April 25, 2025 | 📍 Hotel Radisson Blu, Bengaluru, India

Data Engineering Summit 2025

May 15 - 16, 2025 | 📍 Hotel Radisson Blu, Bengaluru

MachineCon GCC Summit 2025

June 20 to 22, 2025 | 📍 ITC Grand, Goa

Sep 17 to 19, 2025 | 📍KTPO, Whitefield, Bengaluru, India

India's Biggest Developers Summit | 📍Nimhans Convention Center, Bengaluru

India's Biggest Summit on Women in Tech & AI 📍 Bengaluru

Read Entire Article