A new self-learning method is proposed to detect sound spectral components in cochlear nerve firing patterns. Row-wise autocorrelation images of cochlear nerve firing patterns are used to derive frequency-specific autocorrelation masks. These masks are then cross-correlated with the cochlear nerve autocorrelation pattern of an unknown sound to detect the amplitudes of its spectral components. The method is demonstrated in three experimental setups: a natural hearing model, a cochlear implant model with the ACE strategy, and a real MED-EL Opus2 cochlear implant processor interfaced with a simple current spread and nerve excitation model. The proposed method is agnostic to the cochlear nerve stimulation strategy and mimics the brain's ability to learn to interpret any type of new stimulus. It therefore provides an objective way to predict the pitch perception quality of arbitrary cochlear implant types and stimulation strategies, and a solid foundation for implementing new auralization methods for sounds as perceived by cochlear implant users.
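The following is a minimal sketch of the mask-learning and detection steps described above, assuming the cochlear nerve firing pattern is available as a 2-D neurogram (fibers × time bins). The function names (`row_autocorrelation`, `learn_masks`, `detect_amplitudes`) and the zero-lag inner product used as the cross-correlation score are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def row_autocorrelation(neurogram, max_lag):
    """Normalized autocorrelation of each fiber's firing pattern up to max_lag lags."""
    n_rows, n_cols = neurogram.shape
    acf = np.zeros((n_rows, max_lag))
    for i, row in enumerate(neurogram):
        r = row - row.mean()
        full = np.correlate(r, r, mode="full")[n_cols - 1:]   # lags 0 .. n_cols-1
        acf[i] = full[:max_lag] / (full[0] + 1e-12)           # normalize by zero-lag energy
    return acf

def learn_masks(training_neurograms, max_lag):
    """Self-learning step: one autocorrelation mask per known training frequency.

    training_neurograms: dict mapping frequency (Hz) -> neurogram of a pure tone.
    """
    return {f: row_autocorrelation(ng, max_lag)
            for f, ng in training_neurograms.items()}

def detect_amplitudes(unknown_neurogram, masks, max_lag):
    """Score each frequency mask against the unknown sound's autocorrelation image.

    Here the cross-correlation is evaluated at zero shift only (an inner product),
    which is an assumed simplification.
    """
    acf = row_autocorrelation(unknown_neurogram, max_lag)
    return {f: float(np.sum(acf * mask)) for f, mask in masks.items()}
```

Because the masks are learned directly from the nerve response to known tones, the same pipeline can be applied unchanged to neurograms produced by a normal-hearing model or by any cochlear implant stimulation strategy.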