A convex objective function has a single global minimizer, whereas nonconvex functions may have additional local minima. Quasi-Newton methods use only local information in their updates, so they may well converge to a non-global minimum, depending upon the starting values. A possible solution is to try a number of starting values. This is likely to work well if the nonconvexity problem is not too severe. When there are many local minima, a search-type algorithm may become more efficient, since the problem of local minima is dealt with automatically and doesn't require the analyst's intervention. After all, whose time is more important, yours or your computer's?
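The multi-start idea can be sketched as follows. This is an illustrative example, not from the text: the objective, its derivative, the step size, and the starting values are all assumptions, and plain gradient descent stands in for a quasi-Newton update to keep the sketch self-contained.

```python
import math

def f(x):
    # Illustrative nonconvex objective with several local minima.
    return math.sin(3.0 * x) + 0.1 * x * x

def fprime(x):
    # Analytic derivative of f.
    return 3.0 * math.cos(3.0 * x) + 0.2 * x

def gradient_descent(x0, lr=0.01, steps=2000):
    # A simple local, gradient-based minimizer: it converges to
    # whichever local minimum lies in x0's basin of attraction.
    x = x0
    for _ in range(steps):
        x -= lr * fprime(x)
    return x

# Multi-start: run the local minimizer from several starting values
# and keep the best result.
starts = [-3.0, -1.0, 0.0, 1.0, 3.0]
minima = [gradient_descent(x0) for x0 in starts]
best = min(minima, key=f)
print(best, f(best))
```

Different starting values land in different local minima; taking the minimum over all runs recovers the global minimizer when at least one start lies in its basin.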