CCoT (Concise Chain-of-Thought) is a prompt-engineering technique aimed at reducing LLM response verbosity and inference time. On multiple-choice Q&A, it shortens responses by 48.70% with no change in problem-solving performance. On math problems, it incurs a 27.69% performance penalty but reduces average token cost by 22.67%.
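The core idea can be sketched as a small prompt-construction helper: a CCoT prompt is an ordinary chain-of-thought prompt with an added brevity instruction. The exact instruction wording and the `build_prompt` helper below are illustrative assumptions, not the paper's verbatim prompt.

```python
def build_prompt(question: str, concise: bool = True) -> str:
    """Build a chain-of-thought prompt; the CCoT variant adds a brevity
    instruction so the model emits shorter reasoning (fewer output tokens).

    NOTE: the instruction text here is a hypothetical example of the
    technique, not the exact wording used in the CCoT study.
    """
    instruction = "Let's think step by step."
    if concise:
        # CCoT: ask the model to keep its reasoning short.
        instruction += " Be concise."
    return f"{instruction}\n\nQuestion: {question}\nAnswer:"


# Standard CoT prompt vs. the concise variant for the same question.
standard_cot = build_prompt("What is 17 * 24?", concise=False)
ccot = build_prompt("What is 17 * 24?", concise=True)
```

The response-length and token-cost reductions cited above come entirely from this kind of prompt-side change; no model fine-tuning or decoding modification is involved.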