Abstract:
Software quality is heavily affected by the faults associated with it. Detecting faults at the proper stage of software development is a challenging task and plays a vital role in software quality. Machine learning is nowadays a commonly used technique for fault detection and prediction. However, the effectiveness of the fault detection mechanism is impacted by the number of attributes present in the dataset. This paper compares different machine learning approaches, observing their performance to determine which models detect faults in the selected software modules more effectively, and investigates the effect of various feature selection techniques on software fault classification using several of NASA's publicly available benchmark datasets. Various metrics are used to analyze the performance of the feature selection and classification techniques. The experiments reveal that certain classifiers detect the presence of faults more effectively, and that selecting the best features and addressing the class imbalance problem can ensure better software quality.
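The following is a minimal sketch of the kind of evaluation pipeline the abstract describes: feature selection followed by a comparison of classifiers on a NASA-style fault dataset. It assumes scikit-learn and pandas; the dataset file name, the "defective" label column, the chosen classifiers, and the use of class weighting for imbalance are illustrative assumptions, not the thesis's actual configuration.

```python
# Hypothetical sketch: feature selection + classifier comparison on a
# NASA MDP-style defect dataset. File name, label column, and parameter
# choices are assumptions for illustration only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score

# Load a dataset exported to CSV; the "defective" column is assumed to
# hold the binary fault label.
data = pd.read_csv("cm1.csv")
X, y = data.drop(columns=["defective"]), data["defective"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Feature selection: keep the k attributes most associated with the label.
selector = SelectKBest(f_classif, k=10).fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Compare classifiers; class_weight="balanced" is one simple way to
# counteract the class imbalance typical of fault datasets.
models = {
    "LogisticRegression": LogisticRegression(max_iter=1000, class_weight="balanced"),
    "RandomForest": RandomForestClassifier(n_estimators=200, class_weight="balanced"),
}
for name, model in models.items():
    model.fit(X_train_sel, y_train)
    pred = model.predict(X_test_sel)
    proba = model.predict_proba(X_test_sel)[:, 1]
    print(f"{name}: F1={f1_score(y_test, pred):.3f}, "
          f"AUC={roc_auc_score(y_test, proba):.3f}")
```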
Description:
This thesis was submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering at East West University, Dhaka, Bangladesh.