Importance Measures (IMs) are valuable tools that have been used to identify the most important components with respect to overall system reliability, or to characterize the importance of element failures, human errors, and common cause failures, among others. However, different importance measures, based on different definitions, may lead to different importance rankings of the components within a system. In this paper we assess the statistical agreement among the IM rankings on the basis of Kendall's coefficient of concordance W. The coefficient W provides a formal statistical means of assessing the similarity among IM ranks. If the test statistic W is 1, then all the IMs are in complete concordance: each IM ranks the components in the same way, and a single IM can be used. If W is 0, then there is no overall trend of agreement among the IMs, and their ranks may be regarded as essentially different. Intermediate values of W indicate a greater or lesser degree of unanimity among the various IMs. Numerical examples illustrate the use of Kendall's coefficient.
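For reference, a sketch of the standard (tie-free) formulation of Kendall's coefficient of concordance may help fix notation; the symbols m, n, R_j, and \bar{R} are introduced here for illustration and are not taken from the abstract itself. With m importance measures each ranking the same n components, let R_j denote the sum of the ranks assigned to component j and \bar{R} = m(n+1)/2 the mean rank sum; then
\[
W = \frac{12 \sum_{j=1}^{n} \left( R_j - \bar{R} \right)^{2}}{m^{2}\left(n^{3} - n\right)},
\]
so that W = 1 when all measures produce identical rankings and W = 0 when the rank sums show no agreement at all; when tied ranks occur, the usual correction term is subtracted from the denominator.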