The Next Horizon of System Intelligence

SIGOPS Blog Series

Generative AI, as represented by Large Language Models (LLMs), has had an unprecedented impact on problem domains such as code generation and bug fixing, and has reshaped the research landscape across many fields of computing and data science. However, its role in tackling complex, real-world system challenges remains controversial. Anecdotally, some doubt LLMs’ capability to manage system intricacies, while others raise concerns over their safety and interpretability.

This controversy raises hard and urgent questions that remain unanswered and draw drastically different opinions:

  1. Can AI models go beyond coding and bug fixing, towards the design and implementation of innovative and complex systems?
  2. What are the inherent limitations of existing AI models, and what are the best practices and tools available for systems researchers?
  3. How can we effectively “marry” AI research and systems research? Can this partnership ultimately prepare the SIGOPS community for “grand challenges”?

These questions set the stage for a new paradigm of computing systems. We envision:

The fundamental machinery of computing systems is no longer bound by human ingenuity. Instead, it is replaced by self-evolving artifacts, driven by system intelligence that instantiates high-level, declarative goals while obeying the safety and security principles that protect modern systems against adversaries.

With this blog series, we aim to spark active discussion in our community. We welcome scientific debate and principled thinking on the questions above, and, more broadly, on how systems research can be fostered, scaled, and accelerated in the era of generative AI. Subsequent episodes will also share practices and lessons through the community’s real-world stories and experiences.