Pages: 7 pages/≈1925 words
Sources: 3 sources
Style: APA
Subject: IT & Computer Science
Type: Research Paper
Language: English (U.S.)
Document: MS Word
Topic: Mutual Information for Feature Selection

Research Paper Instructions:

Hi, we are coming to the final version of this paper. I understand you did not have enough time for the last assignment, but I appreciate that you still gave me some high-quality work at the end. I can make the time limit more flexible, but you need to let me know when you can finish everything I need before the deadline I set, so that I can plan accordingly. For this final paper, I want at least 2,000 more words (approximately 5,000 in total). The additions should cover the algorithms. As you mentioned in the paper: "Data scientists use algorithms that maximize only relevant information while minimizing unnecessary or redundant information." This idea is abbreviated as 'mRMR' (minimum redundancy, maximum relevance), which is one of the algorithms for mutual-information-based feature selection (MIFS). Try to pick 2 or 3 algorithms of this kind and expand on them in detail: what is the mathematical definition, and what are the pros and cons compared to other algorithms?
I attach some articles that you might want to look at.
https://www(dot)sciencedirect(dot)com/science/article/pii/S0957417415004674
http://home(dot)penglab(dot)com/papersall/docpdf/2005_TPAMI_FeaSel.pdf
https://towardsdatascience(dot)com/mrmr-explained-exactly-how-you-wished-someone-explained-to-you-9cf4ed27458b

Research Paper Sample Content Preview:

Mutual Information for Feature Selection
Name
Institutional Affiliation
Mutual Information for Feature Selection
Introduction
While model selection is critical for learning signals from the provided data, supplying the right variables or data is of utmost importance. In machine learning, model building requires the construction of relevant features or variables through feature engineering, and the resulting data set can then be employed as statistical input to train a model. While these models are often assumed to be sophisticated and smart algorithms, they are easily fooled by unnecessary clutter and dependencies in the data. Data scientists often make the signals easier to identify by performing feature selection, a necessary step in data pre-processing (Huijskens, 2017). According to Zhou, Wang, and Zhu (2022), feature selection is a fundamental pre-processing step in machine learning, as it retains only the crucial features by eliminating redundant or irrelevant features from the primary dataset. Battiti (1994) recognizes this pre-processing stage as a critical step in which the required number of appropriate features is selected from raw data, affecting both the complexity of the learning phase and the achievable generalization performance. In using mutual information (MI) to select features for supervised neural-net learning, Battiti (1994) notes that although the information in the input vector must be sufficient to determine the output class, excess input burdens the training process and thus leads to neural networks with more connection weights than the problem at hand requires. From an application-oriented perspective, excessive features lengthen the duration of pre-processing and recognition, regardless of satisfactory performance in learning and recognition (Battiti, 1994). Data scientists therefore use algorithms that maximize only relevant information while minimizing unnecessary or redundant information.
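For concreteness, the quantity these algorithms optimize can be stated formally. The following standard definition is added here for illustration (it follows the usual convention, as in Battiti, 1994, rather than a formula from the sample itself): for two discrete random variables $X$ and $Y$,

$$I(X;Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}$$

so that $I(X;Y)$ is zero when $X$ and $Y$ are independent and grows as observing $X$ reveals more about $Y$. Criteria such as mRMR combine such terms, rewarding high MI between a feature and the target while penalizing high MI among the features themselves.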
One technique that has been adopted by machine learning experts and data scientists is mutual information feature selection. In this algorithm, a filter feature-selection approach is used to assess the relevance of a subset of features for predicting the target variable, as well as their redundancy with respect to the other variables. Nevertheless, Beraha et al. (2019) note that the existing algorithms are often heuristic and provide no guarantee that they will solve the stated problem. This limitation motivated the authors to build on theoretical results showing that conditional mutual information arises naturally when characterizing the ideal regression or classification errors achievable by various features or subsets of features. One step to take before selecting is to remove words that appear only infrequently, because such words are destined to have high mutual information with one category and low mutual information with the others. Analyses show that low word frequency has a great influence on mutual information: if a word is not frequent but appears mainly in a certain category, it will have a high level of mutual information, which will bring noise to scree...
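To make the filter approach concrete, below is a minimal sketch of greedy mRMR-style selection using the relevance-minus-redundancy ("MID") criterion. It is illustrative only, not the sample's own implementation: it assumes scikit-learn's mutual_info_classif and mutual_info_regression estimators, and the function name mrmr_select and the toy data are hypothetical.

import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, k):
    """Greedily pick k column indices of X (n_samples x n_features),
    maximizing MI with y minus mean MI with already-selected features."""
    # Relevance: estimated MI between each feature and the (discrete) target.
    relevance = mutual_info_classif(X, y)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_i, best_score = None, -np.inf
        for i in range(X.shape[1]):
            if i in selected:
                continue
            # Redundancy: mean estimated MI between candidate i and chosen features.
            redundancy = np.mean(
                [mutual_info_regression(X[:, [j]], X[:, i])[0] for j in selected]
            )
            score = relevance[i] - redundancy  # the mRMR "MID" criterion
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected

# Hypothetical usage: 200 samples, 10 features, a target driven by features 0 and 3.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)
print(mrmr_select(X, y, k=3))

In practice one would estimate the feature-to-feature MI matrix once and cache it, since the inner loop above re-estimates redundancies repeatedly.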