Large language models (LLMs) can reason their way to answers by breaking multistep problems into intermediate steps, using techniques such as chain-of-thought prompting. When given both the problem and a method for solving it, an LLM can solve complex reasoning problems far more accurately than when asked for an answer directly. Explicit reasoning also helps reduce hallucinations, because the model works through the connections in the data step by step rather than jumping to a conclusion.
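In practice, chain-of-thought prompting means including worked examples (or a simple cue like "Let's think step by step") in the prompt so the model produces its own intermediate reasoning. Below is a minimal sketch of assembling such a prompt; the `build_cot_prompt` helper and the example problem are hypothetical illustrations, not part of any particular library:

```python
def build_cot_prompt(question, examples):
    """Assemble a few-shot chain-of-thought prompt.

    Each example pairs a question with a worked, step-by-step solution,
    so the model imitates the intermediate reasoning before answering.
    """
    parts = []
    for ex_question, ex_steps, ex_answer in examples:
        parts.append(f"Q: {ex_question}")
        parts.append(f"A: {ex_steps} The answer is {ex_answer}.")
    parts.append(f"Q: {question}")
    # The trailing cue elicits step-by-step reasoning for the new question.
    parts.append("A: Let's think step by step.")
    return "\n".join(parts)


# One worked example demonstrating intermediate steps (hypothetical data).
examples = [(
    "Roger has 5 balls and buys 2 cans of 3 balls each. How many balls does he have?",
    "He starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11.",
    "11",
)]

prompt = build_cot_prompt("A baker fills 4 trays with 6 rolls each. How many rolls?", examples)
print(prompt)
```

The resulting string would be sent as the model's input; the worked example shows the model *how* to decompose the problem, and the final cue invites it to do the same before stating an answer.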