When you're designing a program, how do you choose appropriate data structures and algorithms? How do you know when the proposed design is likely to be "too slow," without benchmarking the finished code? How can you avoid introducing inefficiencies into the code as you write it, without burning yourself out on "premature optimization"? Why do people always say the inner loop is the most important? How is it that the 10x speedup you committed yesterday was completely wiped out by a mere 2x increase in site traffic today?
In this session, we'll explore the notion of algorithmic complexity, especially as it relates to the data structures and algorithms provided by the C++ standard library, such as std::sort, std::find, and std::binary_search. With just a bit of informal math, we'll define "big-O notation" and demonstrate the differences between complexity classes such as O(1), O(lg n), and O(n). We'll show how to determine the big-O complexity of your own functions by counting loops (including the loops hidden inside STL algorithms and overloaded operators). Then we'll show how we can use limits — or a bit of geometric intuition — to determine that certain algorithms run in linear time despite their complex inner structure, and explain what we mean when we say that vector::push_back runs in "amortized constant time." Finally, we'll discuss ways to trade off space against time, or setup time against query time, and discuss some practical considerations such as constant factors and physical implementation limits.
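
As a quick taste of the kind of material the session covers, here is a minimal, self-contained sketch (an illustration under our own assumptions, not an excerpt from the talk) that makes vector::push_back's amortized reallocation behavior visible and contrasts the O(n) std::find with the O(lg n) std::binary_search:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v;

        // vector::push_back runs in "amortized constant time": most calls
        // are O(1), but occasionally the vector reallocates and moves all
        // n elements. Printing capacity changes makes the geometric
        // growth pattern visible.
        std::size_t cap = v.capacity();
        for (int i = 0; i < 1000; ++i) {
            v.push_back(i);
            if (v.capacity() != cap) {
                cap = v.capacity();
                std::printf("size %zu grew capacity to %zu\n", v.size(), cap);
            }
        }

        // std::find scans element by element: O(n) comparisons in the
        // worst case, and it works on any range, sorted or not.
        bool found_linear = std::find(v.begin(), v.end(), 999) != v.end();

        // std::binary_search halves the range at each step: O(lg n)
        // comparisons, but it requires the range to be sorted (v already
        // is, since we pushed 0..999 in order).
        bool found_log = std::binary_search(v.begin(), v.end(), 999);

        std::printf("find: %d, binary_search: %d\n", found_linear, found_log);
    }

On a sorted range of a million elements, the binary search needs about 20 comparisons where the linear scan may need a million; that gap is the practical force behind the O(lg n) versus O(n) distinction.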