In this paper, we pursue the insight that large language models (LLMs) trained to generate code can significantly improve the effectiveness of mutation operators applied to programs in genetic programming (GP). Because such LLMs benefit from training data that includes sequential changes and modifications, they can approximate the changes a human programmer would plausibly make. To highlight the far-reaching implications of such evolution through large models (ELM), the main experiment combines ELM with MAP-Elites to generate hundreds of thousands of functional examples of Python programs that output working walking robots in the Sodarace domain, a domain the original LLM had never seen in training. These examples then help to bootstrap training a new conditional language model that can output an appropriate walker for a given terrain. The ability to bootstrap new models that can output artifacts appropriate to a given context, in a domain where no training data previously existed, has implications for open-endedness, deep learning, and reinforcement learning. We explore these implications in depth here, in the hope of inspiring new directions of research opened up by ELM.
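To make the mechanism concrete, below is a minimal sketch of how an LLM could serve as the mutation operator inside a MAP-Elites loop. The function names `llm_propose_diff` and `evaluate_walker`, their stubbed bodies, and the two-dimensional behavior descriptor are illustrative assumptions for this sketch, not the paper's actual implementation or API.

```python
import random


def llm_propose_diff(program: str) -> str:
    """Hypothetical call to a code-generating LLM that returns a mutated
    version of `program` (e.g. by prompting for a commit-style edit).
    Stubbed here so the sketch runs end to end."""
    return program  # a real system would return an LLM-edited program


def evaluate_walker(program: str) -> tuple[float, tuple[int, int]]:
    """Hypothetical evaluation: run the program, simulate the walker it
    constructs, and return (fitness, behavior descriptor). Stubbed with
    random values for illustration."""
    fitness = random.random()
    behavior = (random.randrange(10), random.randrange(10))  # e.g. binned height, mass
    return fitness, behavior


def map_elites(seed_program: str, iterations: int = 1000) -> dict:
    """MAP-Elites archive keyed by behavior descriptor; the LLM replaces
    hand-designed GP mutation operators."""
    archive: dict[tuple[int, int], tuple[float, str]] = {}
    fitness, behavior = evaluate_walker(seed_program)
    archive[behavior] = (fitness, seed_program)

    for _ in range(iterations):
        # Pick a random elite and mutate it with the LLM.
        _, parent = random.choice(list(archive.values()))
        child = llm_propose_diff(parent)
        child_fitness, child_behavior = evaluate_walker(child)
        # Keep the child if its niche is empty or it beats the incumbent elite.
        if child_behavior not in archive or child_fitness > archive[child_behavior][0]:
            archive[child_behavior] = (child_fitness, child)
    return archive
```

The elites accumulated in such an archive are the kind of generated examples that could then serve as training data for a new conditional model.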