{"ID":440307,"CreatedAt":"2026-03-04T20:59:09Z","UpdatedAt":"2026-03-04T20:59:09Z","DeletedAt":null,"paper_url":"https://paperswithcode.com/paper/large-language-models-are-zero-shot-reasoners","arxiv_id":"2205.11916","title":"Large Language Models are Zero-Shot Reasoners","abstract":"Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars. Notably, chain of thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved the state-of-the-art performances in arithmetics and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding \"Let's think step by step\" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performances on diverse benchmark reasoning tasks including arithmetics (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g. increasing the accuracy on MultiArith from 17.7% to 78.7% and GSM8K from 10.4% to 40.7% with large InstructGPT model (text-davinci-002), as well as similar magnitudes of improvements with another off-the-shelf large model, 540B parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting high-level, multi-task broad cognitive capabilities may be extracted by simple prompting. 
We hope our work not only serves as the minimal strongest zero-shot baseline for the challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.","short_abstract":"Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars.","url_abs":"https://arxiv.org/abs/2205.11916v4","url_pdf":"https://arxiv.org/pdf/2205.11916v4.pdf","authors":"[\"Takeshi Kojima\", \"Shixiang Shane Gu\", \"Machel Reid\", \"Yutaka Matsuo\", \"Yusuke Iwasawa\"]","published":"2022-05-24T00:00:00Z","tasks":"[\"Arithmetic Reasoning\", \"Common Sense Reasoning\", \"Date Understanding\", \"Few-Shot Learning\", \"GSM8K\", \"Logical Reasoning\", \"Math Word Problem Solving\"]","methods":"[\"PaLM\"]","has_code":false,"code_links":[{"ID":334184,"CreatedAt":"2026-03-04T21:00:12Z","UpdatedAt":"2026-03-04T21:00:12Z","DeletedAt":null,"paper_id":440307,"paper_url":"https://paperswithcode.com/paper/large-language-models-are-zero-shot-reasoners","paper_title":"Large Language Models are Zero-Shot Reasoners","repo_url":"https://github.com/zongqianwu/st-cot","is_official":false,"mentioned_in_paper":false,"mentioned_in_github":true,"framework":"pytorch","github_stars":0},{"ID":410979,"CreatedAt":"2026-03-04T21:00:12Z","UpdatedAt":"2026-03-04T21:00:12Z","DeletedAt":null,"paper_id":440307,"paper_url":"https://paperswithcode.com/paper/large-language-models-are-zero-shot-reasoners","paper_title":"Large Language Models are Zero-Shot Reasoners","repo_url":"https://github.com/skytliang/multi-agents-debate","is_official":false,"mentioned_in_paper":false,"mentioned_in_github":true,"framework":"none","github_stars":0},{"ID":413891,"CreatedAt":"2026-03-04T21:00:12Z","UpdatedAt":"2026-03-04T21:00:12Z","DeletedAt":null,"paper_id":440307,"paper_url":"https://paperswithcode.com/paper/large-language-models-are-zero-shot-reasoners","paper_title":"Large Language Models are Zero-Shot Reasoners","repo_url":"https://github.com/kojima-takeshi188/zero_shot_cot","is_official":true,"mentioned_in_paper":true,"mentioned_in_github":true,"framework":"pytorch","github_stars":0},{"ID":420274,"CreatedAt":"2026-03-04T21:00:12Z","UpdatedAt":"2026-03-04T21:00:12Z","DeletedAt":null,"paper_id":440307,"paper_url":"https://paperswithcode.com/paper/large-language-models-are-zero-shot-reasoners","paper_title":"Large Language Models are Zero-Shot Reasoners","repo_url":"https://github.com/nicolay-r/reasoning-for-sentiment-analysis-framework","is_official":false,"mentioned_in_paper":false,"mentioned_in_github":true,"framework":"pytorch","github_stars":0}]}
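The record above summarizes the Zero-shot-CoT recipe: a single trigger phrase, "Let's think step by step", is prepended to the answer slot, the model generates free-form reasoning, and a second prompt extracts the final answer (the paper's two-stage pipeline uses "Therefore, the answer (arabic numerals) is" as the extraction cue). Below is a minimal Python sketch of that loop; the `complete` callable and the `fake_llm` stub are illustrative assumptions standing in for a real completion model such as text-davinci-002, not code from the official kojima-takeshi188/zero_shot_cot repository.

```python
import re
from typing import Callable

def zero_shot_cot(question: str, complete: Callable[[str], str]) -> str:
    """Two-stage Zero-shot-CoT: reasoning extraction, then answer extraction."""
    # Stage 1: prepend the single trigger phrase to the answer slot and
    # let the model generate its chain of thought.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = complete(reasoning_prompt)

    # Stage 2: feed the generated reasoning back with an answer-extraction
    # cue asking for the final answer in arabic numerals.
    answer_prompt = (
        f"{reasoning_prompt} {reasoning}\n"
        "Therefore, the answer (arabic numerals) is"
    )
    answer_text = complete(answer_prompt)

    # Keep the first number in the completion as the cleaned answer.
    match = re.search(r"-?\d+(?:\.\d+)?", answer_text)
    return match.group(0) if match else answer_text.strip()

if __name__ == "__main__":
    # Toy stand-in model so the sketch runs without network access.
    def fake_llm(prompt: str) -> str:
        if "Therefore" in prompt:
            return " 78."
        return "There are 3 bags with 26 marbles each, so 3 * 26 = 78."

    print(zero_shot_cot(
        "A jar holds 3 bags of 26 marbles each. How many marbles in total?",
        fake_llm,
    ))
```

Running the sketch prints `78`: stage 1 elicits the chain of thought, stage 2 reuses it to pin down a single numeric answer, and the regex cleanup mirrors the kind of answer normalization needed before scoring against benchmarks such as MultiArith or GSM8K.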
