Abstract: This paper investigates the readability and accessibility of Python code automatically generated by large language models. We evaluate two open-source instruction-tuned models, ...
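The snippet above is truncated, so the paper's actual metric is not visible here. As a point of reference, a minimal sketch of how "readability" of generated Python code is often operationalized with static features: average function length, mean identifier length, and comment density. The function `readability_features` and all features below are illustrative assumptions, not the paper's method.

```python
# Illustrative static readability features for generated Python code.
# NOT the paper's metric; a hedged sketch using only the standard library.
import ast
import tokenize
from io import StringIO


def readability_features(source: str) -> dict:
    """Compute simple static readability features for a Python snippet."""
    tree = ast.parse(source)

    # Average function body length in statements (shorter is often easier to read).
    funcs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    avg_func_len = sum(len(f.body) for f in funcs) / len(funcs) if funcs else 0.0

    # Mean identifier length (longer names tend to be more descriptive, up to a point).
    names = [n.id for n in ast.walk(tree) if isinstance(n, ast.Name)]
    avg_name_len = sum(map(len, names)) / len(names) if names else 0.0

    # Comment density: comment tokens per total tokens.
    tokens = list(tokenize.generate_tokens(StringIO(source).readline))
    n_comments = sum(1 for t in tokens if t.type == tokenize.COMMENT)
    comment_ratio = n_comments / len(tokens) if tokens else 0.0

    return {
        "avg_func_len": avg_func_len,
        "avg_name_len": avg_name_len,
        "comment_ratio": comment_ratio,
    }


if __name__ == "__main__":
    snippet = "def add(a, b):\n    # sum two numbers\n    return a + b\n"
    print(readability_features(snippet))
```

Features like these are typically combined with human ratings in readability studies; the weighting of the two sources is a design choice the truncated abstract does not specify.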
We evaluate DeepCode on the PaperBench benchmark (released by OpenAI), a rigorous testbed requiring AI agents to independently reproduce 20 ICML 2024 papers from scratch. The benchmark comprises 8,316 ...
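The snippet cuts off after the task count, but PaperBench's grading is rubric-based: each paper is decomposed into a tree of individually gradable requirements. Below is a minimal sketch of that aggregation scheme, assuming leaves are pass/fail and each node's score is the weight-normalized average of its children; the `RubricNode` class, node names, and weights are hypothetical, not the benchmark's actual schema.

```python
# Hedged sketch of PaperBench-style hierarchical rubric scoring.
# Leaves are binary-graded requirements; internal nodes aggregate
# their children by weighted average. All names/weights are illustrative.
from dataclasses import dataclass, field


@dataclass
class RubricNode:
    name: str
    weight: float = 1.0
    passed: bool | None = None            # set on leaves by a grader
    children: list["RubricNode"] = field(default_factory=list)

    def score(self) -> float:
        """Leaf: 1.0 if passed else 0.0. Internal: weighted mean of children."""
        if not self.children:
            return 1.0 if self.passed else 0.0
        total = sum(c.weight for c in self.children)
        return sum(c.weight * c.score() for c in self.children) / total


# Hypothetical rubric fragment for one paper.
root = RubricNode("reproduce-paper", children=[
    RubricNode("code-runs", weight=2.0, passed=True),
    RubricNode("results-match", weight=3.0, children=[
        RubricNode("table-1", passed=True),
        RubricNode("figure-2", passed=False),
    ]),
])
print(f"Replication score: {root.score():.2f}")  # (2*1.0 + 3*0.5) / 5 = 0.70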