Students' textual feedback holds useful information about their learning experience; it can cover teaching methods, assessment design, facilities, and other aspects of teaching, and can therefore serve as a key input for educators and decision makers seeking to improve their systems. In this paper, we propose a data mining framework for analysing end-of-unit general textual feedback using four machine learning algorithms: support vector machines, decision trees, random forests, and Naive Bayes. We split the full data set into two subsets: one tailored to assessment practices (assessment-related) and one containing the remaining, non-assessment-related feedback. We ran the above algorithms on the full data set and on each of the two subsets. We also adopted a semi-automatic approach to check the classification accuracy of assessment-related instances under the full data set model. We found that the accuracy of the general feedback models was higher than that of the assessment-related models and nearly the same as that of the non-assessment-related models. The accuracy of the assessment-related models was close to the accuracy achieved on assessment-related instances under the full data set models.
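The classification setup described above can be sketched with scikit-learn. This is a minimal illustration only: the feedback strings, the binary assessment-related labels, and the use of TF-IDF features are assumptions for the sake of the example, not the paper's actual data or preprocessing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB

# Hypothetical feedback snippets; label 1 = assessment-related, 0 = not.
texts = [
    "The exam questions were too long",
    "Coursework deadlines clashed with other units",
    "The lecture theatre was too cold",
    "More worked examples in lectures please",
    "Marking criteria for the essay were unclear",
    "The lab equipment often failed",
]
labels = [1, 1, 0, 0, 1, 0]

# The four algorithms used in the framework.
classifiers = {
    "SVM": LinearSVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "Naive Bayes": MultinomialNB(),
}

accuracies = {}
for name, clf in classifiers.items():
    # Vectorise the raw text with TF-IDF, then fit the classifier.
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(texts, labels)
    # Training accuracy only; a real evaluation would use held-out data
    # or cross-validation, as the paper's comparisons require.
    accuracies[name] = model.score(texts, labels)

for name, acc in accuracies.items():
    print(f"{name}: {acc:.2f}")
```

The same loop can then be repeated on the full data set and on each subset, so the resulting accuracy tables are directly comparable across the three data views.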