The mapping of brain areas involved in the representation of living vs. non-living objects has been a matter of debate. Electroencephalography (EEG) and magnetoencephalography (MEG) recordings combined with advanced machine learning techniques have proved useful for this purpose. This study analyzed features extracted from MEG recordings of two subjects performing a language task. Decoding living vs. non-living categories yielded mean accuracies of 57.68% for the visual task and 52.52% for the auditory task (chance level 50%); the four-way decoding of auditory living vs. auditory non-living vs. visual living vs. visual non-living categories yielded a mean accuracy of 49.39% (chance level 25%).
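For orientation, the decoding setup described above can be illustrated with a minimal cross-validation sketch. The abstract does not name a specific classifier, so the linear SVM, the feature dimensions, and the synthetic data below are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: cross-validated decoding of living vs. non-living
# categories from per-trial MEG feature vectors. The classifier (linear SVM)
# and the feature shape (306 sensor features) are assumptions for illustration.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in for extracted MEG features: 200 trials x 306 features.
X = rng.standard_normal((200, 306))
# Binary labels: 0 = living, 1 = non-living (balanced, so chance level = 50%).
y = np.repeat([0, 1], 100)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"Mean decoding accuracy: {scores.mean():.2%} (chance = 50%)")
```

The four-way condition (auditory/visual x living/non-living, chance level 25%) would follow the same pattern with four label values instead of two; a linear SVM handles this via a one-vs-rest scheme by default.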