Codex is a large language model (LLM) trained on a diverse set of codebases that exceeds the previous state of the art in its capacity to synthesize and generate code. Although Codex offers many benefits, models that can generate code at such scale have significant limitations, alignment problems, potential for misuse, and the possibility of accelerating progress in technical fields that may themselves have destabilizing impacts or misuse potential. Yet such safety effects are not yet known or remain to be explored. This paper outlines a hazard analysis framework built with OpenAI to uncover the hazards or safety risks that the deployment of a Codex-like model may impose technically, socially, politically, and economically. The analysis is informed by a novel evaluation framework that determines the capability of advanced code generation techniques against the complexity and expressivity of specification prompts, and their ability to understand and execute them relative to human ability.