Abstract: With the continuous development of genome editing technologies, from early zinc finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs) to CRISPR-Cas9-based systems and their derivatives, including base editors, prime editors, and CRISPR-associated transposases (CAST), the precision, programmability, and functional diversity of genome editing have improved greatly. However, current tools still face challenges in maintaining stable editing efficiency, controlling off-target effects, adapting to diverse targets and cellular contexts, and predicting editing outcomes. Traditional trial-and-error optimization is increasingly inadequate for systematic engineering and large-scale applications. Recently, the rapid development of large language models, protein language models, and deep learning has provided a new computational paradigm for the rational design and optimization of genome editing systems, showing potential in food science and synthetic biology. These models can learn intrinsic patterns from large-scale sequence, structural, and functional data; predict editing efficiency and off-target risks; optimize guide RNAs and editors; and assist in developing new genome editing tools, thereby improving controllability, safety, and research efficiency. This review summarizes the applications of large models in CRISPR systems, base editors, prime editors, and CAST systems, highlights their applications in food science, and aims to provide theoretical reference and methodological support for the intelligent design and engineering of genome editing tools.