Apr 30, 2025

The Future of Programming: Vibecoding with Artificial Intelligence

Generative AI language models have transformed software development in recent years. Tools such as GitHub Copilot, Cursor, and Gemini Code Assist, as well as general-purpose assistants such as ChatGPT, Claude, and Le Chat, demonstrate that AI can now generate complex code from simple text prompts. Trained on vast amounts of publicly available code, these models support developers in reviewing, adapting, or even writing software from scratch. One phenomenon stands out in particular: vibecoding. This approach relies almost entirely on AI-generated code and opens the door to programming even for people without deep technical knowledge. Pandora's box has been opened: where years of learning were once required, a well-formulated prompt is often all it takes today, and the AI does the rest.

For all the fascination with this new ease of software development, the risks must not be ignored. The danger of using AI-generated code without really understanding it is real: bugs, security vulnerabilities, and outdated practices can creep into projects unnoticed, with serious consequences especially in professional environments where maintainability, scalability, and security are essential. Legal questions arise as well, above all in relation to copyright: who owns code generated by AI, and how can the associated risks be mitigated?
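
To make the risk concrete, consider the kind of snippet an AI assistant might plausibly produce for the prompt "look up a user by name". The example below is an illustrative sketch, not output from any particular tool, and the function names are invented for this post: the first version builds its SQL query by string interpolation and is open to SQL injection, while the second shows the parameterized variant a careful review would insist on.

    import sqlite3

    def find_user_unsafe(conn, username):
        # Plausible AI-generated code: works in a quick demo, but
        # interpolating user input into SQL enables SQL injection.
        cursor = conn.execute(
            f"SELECT id, name FROM users WHERE name = '{username}'"
        )
        return cursor.fetchone()

    def find_user_safe(conn, username):
        # Reviewed version: a parameterized query lets the driver
        # escape the input, closing the injection vector.
        cursor = conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        )
        return cursor.fetchone()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO users (name) VALUES ('alice')")
        # The classic payload returns a row even though no user has this name.
        print(find_user_unsafe(conn, "' OR '1'='1"))  # -> (1, 'alice')
        print(find_user_safe(conn, "' OR '1'='1"))    # -> None

Both functions look equally "finished" to someone who cannot read the code, which is precisely why vibecoding without review is hazardous in production settings.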

Copyright Pitfalls When Using AI in Software Development

The use of generative AI in programming raises complex copyright questions that developers and companies should not ignore. On the one hand, many AI models are trained on large volumes of data that often include copyright-protected material. On the other hand, the question arises whether, and to what extent, AI-generated code is itself protected by copyright, and who, if anyone, qualifies as its author.

Under German copyright law (Sections 69a et seq. UrhG), computer programs and individual program components such as code snippets are generally protectable if they are the result of an original intellectual creation. This protection covers the specific form of expression of the code, while the underlying ideas, algorithms, and interface concepts are expressly excluded from copyright protection. The crucial problem with purely AI-generated code is that it lacks human creativity, so such output does not enjoy copyright protection. Software created entirely by an AI coding assistant is therefore generally considered unprotected under copyright law (public domain). The situation may be different where humans contribute significantly to the creative process, for example by providing precise specifications and exerting targeted influence on the result. Such cases are referred to as "AI-assisted works", and copyright protection may well apply.

Particular difficulties arise when AI coding tools draw on public code libraries or packages. Several dangers lurk here: besides the risk of malicious code being integrated unnoticed, there is the possibility of adopting copyright-protected code that is subject to specific open source licenses. Anyone who incorporates code fragments into their software without checking the applicable license terms can quickly violate license requirements and become liable for damages. Copyleft licenses such as the GPL in particular impose far-reaching obligations that can significantly restrict commercial use.
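
A first technical line of defense is to inventory the licenses a project's dependencies declare. The following minimal sketch uses Python's standard importlib.metadata to list each installed package with its declared license; the FLAGGED list is a hypothetical example, and reading metadata fields can never replace a legal review of the actual license texts.

    from importlib.metadata import distributions

    # Copyleft license families that typically warrant a closer legal
    # look; this list is illustrative, not exhaustive.
    FLAGGED = ("GPL", "AGPL", "LGPL")

    def license_inventory():
        # Read the free-text License field from each installed package.
        # Matching this field is naive; serious tooling also inspects
        # trove classifiers and the bundled license files.
        for dist in sorted(distributions(),
                           key=lambda d: d.metadata["Name"] or ""):
            name = dist.metadata["Name"]
            license_ = dist.metadata["License"] or "unknown"
            flagged = any(tag in license_.upper() for tag in FLAGGED)
            print(f"{name}: {license_}{' <-- review' if flagged else ''}")

    if __name__ == "__main__":
        license_inventory()

Such an inventory only surfaces what packages declare about themselves; whether an AI assistant has reproduced protected code verbatim from its training data is a separate question that requires review of the generated code itself.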

How Companies Can Protect Themselves from the Risks of AI-Generated Software

Companies that use AI-assisted coding tools or procure software from external service providers should address the associated legal and technical risks early and in detail. Deliberate risk management is essential to avoid liability traps, security gaps, and license violations.

Where external service providers are involved, appropriate contractual provisions are important. In particular, the labeling of AI-generated work products and the transparent handling of third-party sources and licenses should be contractually guaranteed. When acquiring licenses to third-party works, it must also be clearly defined whether, and to what extent, AI was used in creating the software.

For companies that develop software in-house, the new EU Regulation on Artificial Intelligence (AI Regulation) comes into focus. While most of its provisions will not apply until August 2, 2026, Chapters I and II have already been in effect since February 2, 2025. Article 4 of the AI Regulation in particular obliges companies to ensure adequate training for personnel involved in the operation and use of AI systems. Employees must have a sufficient level of AI competence to recognize and manage risks, for example when using coding assistants. Companies should therefore ensure that their developers are not only technically proficient but also aware of the legal pitfalls, security aspects, and licensing issues of AI-generated code.

In addition, it is advisable to introduce a company-wide AI policy with clear guidelines on the use of AI tools, the testing of generated code, and the handling of open source licenses and third-party libraries; a sketch of how such a policy can be enforced technically follows below. Such guidelines help establish uniform standards and minimize the risk of wrong decisions at the operational level. Transparent handling of AI work products and proactive engagement with new regulatory requirements are key to leveraging the benefits of AI in coding safely and in a legally compliant manner.
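
At the operational level, parts of such a policy can be enforced automatically, for example as a pre-merge gate in continuous integration. The sketch below is hypothetical: the ALLOWED_LICENSES set stands in for an allowlist that a company's legal and engineering teams would maintain, and the exact-string match is deliberately simple.

    import sys
    from importlib.metadata import distributions

    # Hypothetical allowlist from an internal AI / open source policy.
    ALLOWED_LICENSES = {"MIT", "BSD", "Apache-2.0", "ISC", "Python-2.0"}

    def check_policy():
        # Collect every installed package whose declared license is not
        # on the allowlist. Exact matching is naive but keeps the gate
        # conservative: anything unclear is escalated for human review.
        violations = []
        for dist in distributions():
            license_ = (dist.metadata["License"] or "unknown").strip()
            if license_ not in ALLOWED_LICENSES:
                violations.append((dist.metadata["Name"], license_))
        return violations

    if __name__ == "__main__":
        violations = check_policy()
        for name, license_ in violations:
            print(f"policy violation: {name} ({license_})")
        # A nonzero exit code lets CI block the merge until the finding
        # is resolved or an exception is documented.
        sys.exit(1 if violations else 0)

Failing closed in this way turns the written policy into a default that developers must consciously override, rather than a document that is easy to forget under deadline pressure.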

Rechtsanwalt Anton Schröder

I.  https://fin-law.de

E. info@fin-law.de
