Patentability examination, the process of checking whether the claims of a patent application meet the requirements for patentability, relies heavily on arduous expert effort grounded in domain knowledge. Automating this examination is therefore a pressing, yet underappreciated, need. In this work, the first to apply deep learning to automated patentability examination, we formulate the task as a multi-label text classification problem; it is challenging because the model must learn cross-sectional characteristics of abstract requirements (the labels) from text replete with inventive terminology. To address the problem, we fine-tune downstream multi-label classification models on pre-trained transformer variants (BERT-Base/Large, RoBERTa-Base/Large, and XLNet), in light of their state-of-the-art results on many tasks. We evaluate these models on a large USPTO patent database and identify the best-performing model under micro-precision, micro-recall, and micro-F1.
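As an illustrative sketch (not the authors' evaluation code), the micro-averaged metrics named above pool true positives, false positives, and false negatives over all sample-label pairs before computing precision, recall, and F1, which is why they suit multi-label settings with imbalanced labels:

```python
# Micro-averaged precision/recall/F1 for multi-label classification.
# Counts are pooled over every (sample, label) pair, so frequent labels
# contribute proportionally more than rare ones.

def micro_prf(y_true, y_pred):
    """y_true, y_pred: equal-length lists of 0/1 label vectors."""
    tp = fp = fn = 0
    for true_vec, pred_vec in zip(y_true, y_pred):
        for t, p in zip(true_vec, pred_vec):
            tp += 1 if (t and p) else 0          # predicted and correct
            fp += 1 if (p and not t) else 0      # predicted but wrong
            fn += 1 if (t and not p) else 0      # missed a true label
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

For example, with gold labels `[[1,0,1],[0,1,0]]` and predictions `[[1,1,1],[0,1,0]]`, the pooled counts are tp=3, fp=1, fn=0, giving micro-precision 0.75 and micro-recall 1.0.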