News and In-Depth Articles
News report | VentureBeat
When AI reasoning goes wrong: Microsoft Research shows more tokens can mean more problems
Large language models (LLMs) are increasingly capable of complex reasoning through “inference-time scaling,” a set of techniques that allocate more computational resources during inference to generate answers. However, a new study from Microsoft Research reveals that the effectiveness of these…