Convex optimization highlights ICCV program session
By John Kaiser, Research News
Convex optimization, prized for its efficiency and utility in solving small and medium-size problems across multiple disciplines, could soon be extended to handle much larger problems and tasks, such as those found in image and vision processing.
Explaining how and why that might happen, Prof. Stephen Boyd of Stanford University gave the plenary address Sunday at the biennial International Conference on Computer Vision (ICCV) in Santiago, Chile.
To date, convex optimization, used in many applications such as AI, automatic control, and automated trading, starts to falter when problems exceed roughly 10,000 variables. That's sufficient to control an engine or inform hedge-fund portfolios, but not nearly enough to handle the complex image-recognition calculations required for autonomous vehicles navigating busy urban areas.
Boyd is a leading authority on convex optimization, a century-old branch of mathematics that was largely theoretical until the early 1990s. That's when faster computing speeds, along with advances in algorithms for solving convex problems, made it practical for applications like predicting peak power usage. Fast forward to today, and convex optimization is routinely used not only in computer science but also in genetics, bioengineering, economics, the behavioral sciences, and other fields.
In the plenary talk and accompanying paper, "Convex Optimization with Abstract Linear Operators," Boyd describes "recent progress toward the goal of extending (domain specific languages or DSLs) to handle large-scale problems that involve linear operators given as abstract operators with fast transforms, such as those arising in image processing and vision, medical imaging, and other application areas. This involves re-thinking the entire stack, from the high-level DSL design down to the low level solvers."
The research was conducted in collaboration with Steven Diamond, PhD candidate at Stanford.
Although faster and cheaper CPUs will always play a role in the rise of any newly adopted technology, DSLs may prompt wider adoption by enabling non-experts to articulate exactly what they want in a more natural language.
“Domain specific languages allow a broad variety of people to use (convex optimization) tools without knowing the details of how they work,” Boyd said.
It's the prospect of solving very difficult computational problems with just a few lines of code that has far-reaching, "game changing" implications.
Eventual goal: “People who do images and work with 1 million variables can just write a 5-line Python script,” Boyd said.
Domain specific languages for convex optimization are easy to use because they are "simple declarative languages": programmers declare what they want, not how to achieve it. This is unlike the usual procedural languages, in which programmers spell out step by step how to compute the result.
“In most quantitative hedge funds you don’t have hardcoded logic that says what to sell or buy depending on how the market moves,” Boyd explained. “What they do instead is solve an optimization problem that takes everything into account and then decides what to trade.”
Boyd’s PhD-level course on convex optimization is among the largest graduate courses at Stanford with more than 330 students (compared to a median graduate class size of 12). For access to open source code, papers, and related materials, see the Stephen P. Boyd site at Stanford University.
Sponsored by the Institute of Electrical and Electronics Engineers (IEEE), ICCV runs from Dec. 11-18, 2015.