Thoughts on Large Language Models and Low Context Window Programming Language (LCWPL)
Machine-translated from Chinese.
Recent Experience with AI-Assisted Programming
Recently I’ve been using AI to assist with programming, and my biggest takeaway is that the context window is too small. Large Language Models (LLMs) are very capable, but because software development requires producing and remembering large amounts of code, the model often forgets what it was doing. If you ask it to produce only the code for a single file (a few hundred lines), most current LLMs handle that well.
This explains why opinions on current AI programming capability are so polarized: much of people’s work is secondary development and optimization of existing systems with large amounts of legacy code, which current LLMs struggle to read and retain in full. Humans are similar; no one can accurately recall every line of code in an operating system kernel.
Predictions for the Future of AI-Assisted Software Development
Therefore, I predict that in the near future, we will see two branches of AI-assisted software development:
- AI agents: a large software project is broken into many small functional modules, each written and tested by a specially trained AI agent (trained only on relevant code, e.g., responsible only for backend code). This is likely to become the consensus approach. Its biggest problem is high resource overhead, and the idea is not especially innovative; it simply substitutes AI for humans.
- A new programming language designed for low context overhead (Low Context Window Programming Language, LCWPL), suited to current AI programming while keeping syntax simple enough for humans to maintain (AI-only maintenance is still in its early stages). Languages like Java and C++, with their long class names and complex programming paradigms, will be gradually abandoned (or translated automatically, with human assistance, from Java/C++ into LCWPL), and systems written in them will be rewritten in the new language (with such a language, using AI to rewrite an operating system would not be difficult). Python and other dynamic languages may survive in academia for a while thanks to their rich libraries, but will eventually fade the way Fortran did.
My Personal Experience with Go Language
In my personal experience, Go is currently very well suited to LLMs, mainly because its syntax is simple and there is one uniform way to solve similar problems, unlike languages with multiple styles and many idiosyncratic libraries. This consistency makes it easier for a model to learn the single correct approach, reducing context confusion. Similarly, some older, stricter languages (such as Ada and Eiffel) may be revived in the future precisely because of their standardization, serving as foundations for AI programming.
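A concrete illustration of that uniformity is Go’s error handling: every fallible call returns a value and an error, and the caller checks the error immediately. There is only one idiomatic style for a model to learn. A minimal sketch (the `parsePort` function is my own illustrative example, not from the original post):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort shows Go's single idiomatic pattern for failure:
// return (value, error) and let the caller check err at once.
// There is no exception machinery and no competing second style.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("invalid port %q: %w", s, err)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range")
	}
	return n, nil
}

func main() {
	for _, s := range []string{"8080", "abc"} {
		if p, err := parsePort(s); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("port:", p)
		}
	}
}
```

Because the same shape appears at every call site, the model never has to decide between exceptions, error codes, and result types; the context it must hold is correspondingly smaller.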
Characteristics of Future Programming Languages
In summary, we can predict that new programming languages will have the following characteristics:
- They will definitely be compiled languages rather than dynamically interpreted ones like Python, because dynamic languages are too complex for AI to debug: compilation performs many checks automatically, and compiled languages reduce the number of dynamic scenarios the AI must track.
- They will definitely have simple syntax. They may offer syntactic sugar, but not many complex programming paradigms, since overly complex paradigms confuse AI. For AI, simplicity is best.
- They will definitely support feature-oriented integration: components are developed (written by AI) according to functions specified by humans, one at a time and recursively. This lets the AI thoroughly test each new function before incrementally integrating it into the system. In other words, the AI first develops the “skeleton” code (a tree structure), then develops small functional components and hangs them on the tree. Development in this style needs essentially no “scaffolding” (beyond low-level library code).
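The skeleton-then-components idea above can be sketched in today’s Go. All names here (`Component`, `Node`, `stub`) are my own hypothetical illustration of the tree structure the author describes, not an existing API: the skeleton declares what each node must do, stubs keep the tree runnable, and tested implementations replace the stubs one by one.

```go
package main

import "fmt"

// Component is what every node of the skeleton tree must provide.
type Component interface {
	Name() string
	Run() error
}

// Node is one point in the skeleton; children are hung on it
// incrementally as each component passes its own tests.
type Node struct {
	Component
	Children []*Node
}

// RunAll walks the tree depth-first, running every component,
// so the whole system stays executable at every stage.
func (n *Node) RunAll() error {
	if err := n.Run(); err != nil {
		return fmt.Errorf("%s: %w", n.Name(), err)
	}
	for _, c := range n.Children {
		if err := c.RunAll(); err != nil {
			return err
		}
	}
	return nil
}

// stub is a placeholder leaf used before the real component exists.
type stub struct{ name string }

func (s stub) Name() string { return s.name }
func (s stub) Run() error   { return nil }

func main() {
	// Skeleton first: an "app" root with two stubbed components.
	root := &Node{Component: stub{"app"}}
	root.Children = append(root.Children,
		&Node{Component: stub{"storage"}},
		&Node{Component: stub{"api"}},
	)
	fmt.Println(root.RunAll() == nil) // prints true: all stubs pass
}
```

An AI agent working this way only ever needs the `Component` contract and the one node it is filling in, never the whole tree, which is exactly the context reduction the author is after.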
- These languages will have syntax-level mechanisms, analogous to Rust’s borrowing, that explicitly tell the AI which files and functions are strongly associated and which are weakly associated. During development or later modification, the AI then only needs to read the associated files and functions rather than the entire project, significantly reducing the overall complexity of system development. This mirrors human development: we focus on the module we are working on and a limited number of related modules, rather than trying to load the entire system into our heads.
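No such association syntax exists yet, but a rough approximation is already possible with narrow interfaces: the consumer is strongly associated only with a small contract, and everything behind it is weakly associated and can change without being read. The names below (`Store`, `Reporter`, `memStore`) are hypothetical, chosen only to illustrate the idea:

```go
package main

import "fmt"

// Store is the narrow contract Reporter depends on. To modify
// Reporter, an assistant only needs this interface in context,
// not any storage implementation: the coupling is explicit.
type Store interface {
	Get(key string) (string, bool)
}

// Reporter is "strongly associated" only with Store.
type Reporter struct{ s Store }

func (r Reporter) Describe(key string) string {
	if v, ok := r.s.Get(key); ok {
		return key + "=" + v
	}
	return key + " missing"
}

// memStore is one interchangeable backend; its internals are
// "weakly associated" with Reporter and can change freely.
type memStore map[string]string

func (m memStore) Get(k string) (string, bool) { v, ok := m[k]; return v, ok }

func main() {
	r := Reporter{s: memStore{"env": "prod"}}
	fmt.Println(r.Describe("env")) // prints env=prod
}
```

What the author proposes goes further: making such boundaries first-class syntax the AI can query, rather than a convention the reader must infer from the import graph.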
Old languages will not disappear immediately, but from the standpoint of maintainability and AI compatibility, the emergence of new languages is the trend.