In this work, we improve on the upper and lower bounds for the regret of online learning with strongly observable undirected feedback graphs. The best known upper bound for this problem is $O\bigl(\sqrt{\alpha T \ln K}\bigr)$, where $K$ is the number of actions, $\alpha$ is the independence number of the graph, and $T$ is the time horizon. The $\sqrt{\ln K}$ factor is known to be necessary when $\alpha = 1$ (the experts case). On the other hand, when $\alpha = K$ (the bandits case), the minimax rate is known to be $\Theta\bigl(\sqrt{KT}\bigr)$, and a lower bound $\Omega\bigl(\sqrt{\alpha T}\bigr)$ is known to hold for any $\alpha$. Our improved upper bound $O\bigl(\sqrt{\alpha T \bigl(1 + \ln(K/\alpha)\bigr)}\bigr)$ holds for any $\alpha$ and matches the lower bounds for bandits and experts, while interpolating the intermediate cases. To prove this result, we use FTRL with the $q$-Tsallis entropy for a carefully chosen value of $q \in [1/2, 1)$ that varies with $\alpha$. The analysis of this algorithm requires a new bound on the variance term in the regret. We also show how to extend our techniques to time-varying graphs, without requiring prior knowledge of their independence numbers. Our upper bound is complemented by an improved $\Omega\bigl(\sqrt{\alpha T (\ln K)/\ln \alpha}\bigr)$ lower bound for all $\alpha > 1$, whose analysis relies on a novel reduction to multitask learning. This shows that a logarithmic factor is necessary as soon as $\alpha < K$.
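As a minimal sketch of the algorithmic template referenced above (the paper's exact tuning of $q$ and of the learning rate $\eta$ as functions of $\alpha$ is not reproduced here), FTRL with the $q$-Tsallis entropy regularizer selects, at each round $t$, a distribution over the $K$ actions of the form
\[
\psi_q(x) \;=\; \frac{1}{1-q}\Bigl(1 - \sum_{i=1}^{K} x_i^{q}\Bigr),
\qquad q \in \bigl[\tfrac{1}{2}, 1\bigr),
\]
\[
x_t \;=\; \operatorname*{arg\,min}_{x \in \Delta_{K}} \Bigl\{\, \eta \sum_{s=1}^{t-1} \langle \widehat{\ell}_s, x \rangle \;+\; \psi_q(x) \Bigr\},
\]
where $\Delta_K$ is the probability simplex over the actions, $\widehat{\ell}_s$ denotes importance-weighted loss estimates built from the feedback observed through the graph, and $q$ is tuned as a function of the independence number $\alpha$ (recovering, at the extremes, entropic regularization for experts and Tsallis-$1/2$ regularization for bandits).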