
Andrew Grimes

emcon33

AI & ML interests

Background in AI (Lisp programming, anyone?) and personal projects with NVIDIA JetBot, OpenCV, NLP, generative AI with LLMs, the AWS-hosted application space, Watsonx, SageMaker, and OpenShift AI.

Recent Activity

updated a Space about 2 months ago
RedHatAI/README

Organizations

Red Hat

emcon33's activity

updated a Space about 2 months ago
RedHatAI/README
reacted to akhaliq's post with 👍 10 months ago
Chain-of-Thought Reasoning Without Prompting

paper page: Chain-of-Thought Reasoning Without Prompting (2402.10200)

In enhancing the reasoning capabilities of large language models (LLMs), prior research primarily focuses on specific prompting techniques such as few-shot or zero-shot chain-of-thought (CoT) prompting. These methods, while effective, often involve manually intensive prompt engineering. Our study takes a novel approach by asking: Can LLMs reason effectively without prompting? Our findings reveal that, intriguingly, CoT reasoning paths can be elicited from pre-trained LLMs by simply altering the decoding process. Rather than conventional greedy decoding, we investigate the top-k alternative tokens, uncovering that CoT paths are frequently inherent in these sequences. This approach not only bypasses the confounders of prompting but also allows us to assess the LLMs' intrinsic reasoning abilities. Moreover, we observe that the presence of a CoT in the decoding path correlates with a higher confidence in the model's decoded answer. This confidence metric effectively differentiates between CoT and non-CoT paths. Extensive empirical studies on various reasoning benchmarks show that the proposed CoT-decoding substantially outperforms the standard greedy decoding.
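As a rough illustration of the decoding change the abstract describes, here is a minimal sketch of CoT-decoding using Hugging Face transformers: branch on the top-k alternative first tokens instead of taking only the greedy one, continue each branch greedily, and score each path by the average probability gap between the top two tokens at each step. The model name (gpt2 as a small stand-in), the value of k, the helper name cot_decode, and the use of the whole continuation for the confidence score (the paper computes it over the answer tokens only) are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of CoT-decoding: explore top-k first tokens, then decode each branch
# greedily and attach a confidence score. Assumptions (not from the paper):
# model choice, k, and scoring over the full continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical stand-in; the paper uses larger pre-trained LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def cot_decode(prompt: str, k: int = 10, max_new_tokens: int = 64):
    """Branch on the top-k first tokens, then continue each branch greedily.

    Returns (confidence, text) pairs, where confidence is the average gap
    between the top-1 and top-2 token probabilities along the path -- a
    proxy for the paper's answer-span confidence metric.
    """
    inputs = tokenizer(prompt, return_tensors="pt")
    prompt_len = inputs["input_ids"].shape[1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]      # next-token logits
    top_k = torch.topk(logits, k).indices           # k alternative first tokens

    candidates = []
    for first_token in top_k:
        ids = torch.cat([inputs["input_ids"][0], first_token.reshape(1)]).unsqueeze(0)
        gaps = []
        for _ in range(max_new_tokens):
            with torch.no_grad():
                step_logits = model(ids).logits[0, -1]
            probs = torch.softmax(step_logits, dim=-1)
            top2 = torch.topk(probs, 2).values
            gaps.append((top2[0] - top2[1]).item())  # top-1 vs top-2 probability gap
            next_id = step_logits.argmax().reshape(1, 1)  # greedy continuation
            ids = torch.cat([ids, next_id], dim=1)
            if next_id.item() == tokenizer.eos_token_id:
                break
        text = tokenizer.decode(ids[0, prompt_len:])
        candidates.append((sum(gaps) / len(gaps), text))

    # Per the post, higher-confidence paths tend to be the CoT paths.
    return sorted(candidates, key=lambda c: c[0], reverse=True)
```

Calling cot_decode("Q: I have 3 apples and eat one. How many are left?\nA:") returns the k continuations sorted by confidence; per the post, the high-confidence paths are the ones most likely to contain chain-of-thought reasoning, with no CoT prompt required.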
updated a model about 1 year ago