Nutrient deficiency during growth is a major cause of reduced yield across a wide variety of plants. Yield can be improved if nutrient levels are checked regularly and maintained through proper fertilization and pest control. To perform this task, a wide variety of image-processing models have been developed that analyse plant imagery for visually apparent nutrient-deficiency patterns. However, such systems can identify deficiencies only after the plant has already been affected. In contrast, intrusive systems perform chemical analysis on leaf samples to estimate these deficiencies, but because only randomly selected leaf samples are evaluated, their testing coverage and accuracy are limited. To overcome these drawbacks, this paper proposes a nonintrusive multifrequency visible-light analysis framework for identifying multiple nutrient deficiencies across a wide variety of plants. The framework leverages deep learning to map spatial and temporal light properties to nutrient deficiencies with high accuracy. To validate these readings, the framework also processes nonintrusive microscopic photographs of the same plant leaf using a deep-learning technique. This is accomplished through an incremental learning approach that combines the high-efficiency classification capabilities of the VGGNet19 and XceptionNet models (a hedged sketch of such a combination follows below). Because it is nonintrusive, the model can be used for proactive nutrient monitoring and for taking corrective actions that improve yield quality. The proposed model was evaluated for potassium, nitrogen, copper, zinc, and phosphorus deficiencies in orange, cotton, apple, banana, mango, litchi, henna, gooseberry, and okra leaves. It was observed that the proposed approach achieves 99.7% accuracy for detection of potassium deficiency, 97.2% for nitrogen, 98.5% for copper, 96.8% for zinc, and 95.9% for phosphorus. These accuracies were computed as the model's average performance across the different leaf types and were compared with various state-of-the-art models. The proposed approach shows 8% better accuracy than previously described models, along with better precision, recall, and F-measure. Owing to its deep-learning pipeline, the model incurs a large initial training delay, but its evaluation speed is on par with recent state-of-the-art models.
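
The abstract names a combination of VGGNet19 and XceptionNet classifiers but does not describe its exact architecture. The following is a minimal illustrative sketch, not the authors' implementation, of one common way to fuse the two backbones for leaf-deficiency classification in Keras; the class count, input size, and classification-head layers are assumptions made for the example.

    # Hypothetical sketch: fuse frozen VGG19 and Xception feature extractors
    # and classify into five assumed deficiency classes (K, N, Cu, Zn, P).
    import tensorflow as tf
    from tensorflow.keras import layers, models, applications

    NUM_CLASSES = 5               # assumed: one class per deficiency
    INPUT_SHAPE = (224, 224, 3)   # assumed input resolution

    inputs = layers.Input(shape=INPUT_SHAPE)

    # Each backbone expects its own preprocessing of the raw RGB input.
    vgg_in = applications.vgg19.preprocess_input(inputs)
    xcp_in = applications.xception.preprocess_input(inputs)

    vgg = applications.VGG19(include_top=False, weights="imagenet",
                             input_shape=INPUT_SHAPE, pooling="avg")
    xcp = applications.Xception(include_top=False, weights="imagenet",
                                input_shape=INPUT_SHAPE, pooling="avg")
    vgg.trainable = False         # freeze backbones for the initial phase
    xcp.trainable = False

    # Concatenate pooled features from both backbones, then classify.
    features = layers.Concatenate()([vgg(vgg_in), xcp(xcp_in)])
    x = layers.Dense(256, activation="relu")(features)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()

In an incremental-learning setting, the frozen backbones would typically be fine-tuned or extended in later stages as new plant species or deficiency classes are added; the sketch above covers only the initial feature-fusion step.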