Research on the Translation of Chinese Complex Sentences With Large Language Model ChatGPT-4o —Taking English Translation of Chinese Flowing Sentences as an Example
DOI: https://doi.org/10.17507/tpls.1505.19
Keywords: LLMs, ChatGPT-4o, Chinese flowing sentences, translation
Abstract
Large language models (LLMs) have demonstrated outstanding performance in machine translation. Many scholars have explored LLM-based translation from various perspectives, primarily focusing on routine, simple, everyday texts; research on specific sentence structures, especially complex sentences, remains insufficient. This study therefore investigates how ChatGPT-4o translates Chinese complex sentences, focusing on Chinese flowing sentences, a sentence pattern particular to Chinese. It examines how ChatGPT-4o identifies and translates the subjects of zero-subject sentence segments within Chinese flowing sentences, as well as how it translates the frozen sentence segments in such sentences. The results indicate that ChatGPT-4o can generally identify all ten categories of subjects in zero-subject sentence segments, though it occasionally produces ambiguous identifications; that it employs two translation strategies, either adding subjects (mostly pronouns) to render zero-subject segments as independent clauses or rendering them as attributive structures; that the accuracy of its translation of frozen sentence segments is inconsistent; and that it may adopt either the literal or the metaphorical meaning of a frozen sentence segment, determining the segment's form according to the overall structure of the flowing sentence.
References
Chen, P., Guo, Z., Haddow, B., & Heafield, K. (2023). Iterative translation refinement with large language models. arXiv preprint arXiv:2306.03856.
Fang, M. Z. (2003). Yi xue ci dian [The Dictionary of Translation Studies]. Shanghai Foreign Language Education Press.
Hendy, A., Abdelrehim, M., Sharaf, A., Raunak, V., Gabr, M., Matsushita, H., ... & Awadalla, H. H. (2023). How good are GPT models at machine translation? A comprehensive evaluation. arXiv preprint arXiv:2302.09210.
Hu, D. Q. (1999). Liu shui ju de li jie yu ying yi [The understanding and translation of run-on sentences]. Wai yu yu wai yu jiao xue, 118(3), 46-49.
Jiao, W., Wang, W., Huang, J. T., Wang, X., Shi, S., & Tu, Z. (2023). Is ChatGPT a good translator? Yes with GPT-4 as the engine. arXiv preprint arXiv:2301.08745.
Kocmi, T., & Federmann, C. (2023). Large language models are state-of-the-art evaluators of translation quality. arXiv preprint arXiv:2302.14520.
Lu, Q., Qiu, B., Ding, L., Zhang, K., Kocmi, T., & Tao, D. (2023). Error analysis prompting enables human-like translation evaluation in large language models. arXiv preprint arXiv:2303.13809.
Ma, X., Fang, G., & Wang, X. (2023). LLM-Pruner: On the structural pruning of large language models. Advances in Neural Information Processing Systems, 36, 21702-21720.
Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M., Socher, R., Amatriain, X., & Gao, J. (2024). Large language models: A survey. arXiv preprint arXiv:2402.06196.
Moorkens, J., Way, A., & Lankford, S. (2024). Automating Translation. Routledge.
Nagi, K. A., Alzain, E., & Naji, E. (2024). Informed Prompts and Improving ChatGPT English to Arabic Translation. Al-Andalus Journal for Humanities & Social Sciences, 11(98), 56-78.
Nida, E. A. (1983). Translating Meaning. English Language Institute.
Pavlick, E. (2022). Semantic structure in deep learning. Annual Review of Linguistics, 8(1), 447-471.
Quirk, R., & Crystal, D. (2010). A comprehensive grammar of the English language. Pearson Education India.
Raunak, V., Sharaf, A., Wang, Y., Awadallah, H. H., & Menezes, A. (2023). Leveraging GPT-4 for automatic translation post-editing. arXiv preprint arXiv:2305.14878.
Renkema, J. (2004). Introduction to Discourse Studies. John Benjamins.
Santos, H., & Khalil, A. (2024). Unleashing the potential of LLM in ML: Techniques for fine-tuning, adaptation, and practical deployment with ChatGPT. Baltic Multidisciplinary Journal, 2(2), 179-184.
Sun, K. (2013). Hua ti lian shi jiao xia de han ying pian zhang zu zhi mo shi dui bi yan jiu [A comparative study of Chinese and English text organization patterns from the perspective of topic chain]. Jie fang jun wai guo yu xue yuan xue bao, 36(3), 12-18+51+127.
Xu, J. (2014). Cong fan yi chu fa: fan yi yu fan yi yan jiu [A Point of Departure: Translation and Translation Studies]. Fudan University Press.
Zhao, C. Y., & Wang, W. B. (2023). Han yu liu shui ju ying yi de jie gou zhuan huan ce lue: ying han shi kong xing qiang ruo cha yi shi jiao [Structural transformation strategies for translating Chinese flowing sentences into English: From the perspective of the differences in spatiotemporal intensity between English and Chinese]. Wai yu jiao xue, 44(4), 15-22.