MLPClassifier



Problem

You’re training an MLPClassifier (a neural network) on MNIST. Which preprocessing step is essential before training?


Option Analysis

  1. Convert images to grayscale.
    ❌ Not needed — MNIST images are already single-channel grayscale (28×28 pixels).

  2. Scaling the data using Min-Max Scaling.
    ✅ Correct — MNIST pixel values range from 0 to 255, and gradient-based training is sensitive to feature magnitudes. Scaling the inputs to a common range (e.g., [0, 1] via Min-Max scaling, or zero mean and unit variance via standardization) is essential for faster convergence and better performance.

  3. Apply PCA to reduce feature dimensions.
    ⚠️ Not essential — PCA can speed up training, but it isn’t required for good performance; MNIST has only 784 features (28×28 pixels), which an MLP handles fine.

  4. One-Hot encode the target labels (digits 0–9).
    ⚠️ Not essential — Scikit-Learn’s MLPClassifier accepts integer class labels directly and handles the encoding internally, so explicit one-hot encoding is unnecessary.
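Putting options 2 and 4 together, here is a minimal sketch of the essential pipeline. (Assumption: Scikit-Learn’s small built-in digits dataset stands in for MNIST so the example runs offline; real MNIST would come from `fetch_openml("mnist_784")`. The hidden-layer size and iteration count are illustrative, not tuned.)

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

# Stand-in for MNIST: 8x8 digit images, pixel values 0..16, labels 0..9.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Min-Max scaling: fit on the training split only, then transform both,
# so no information from the test set leaks into the scaler.
scaler = MinMaxScaler()  # maps each feature into [0, 1]
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Integer class labels go in directly -- no one-hot encoding required.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=42)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

Note that the scaler is fit only on the training data; reusing its learned min/max on the test split is what keeps the evaluation honest.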


✅ Correct Answer:

Scaling the data using Min-Max Scaling
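To see why PCA is optional rather than essential, a quick sketch comparing the same scaled pipeline with and without it (assumption: the small digits dataset again stands in for MNIST, and `n_components=0.95` keeps 95% of the variance — both choices are illustrative):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

results = {}
for name, steps in {
    # PCA keeps enough components to explain 95% of the variance.
    "with PCA": [MinMaxScaler(), PCA(n_components=0.95),
                 MLPClassifier(max_iter=300, random_state=0)],
    "without PCA": [MinMaxScaler(),
                    MLPClassifier(max_iter=300, random_state=0)],
}.items():
    pipe = make_pipeline(*steps)
    pipe.fit(X_train, y_train)
    results[name] = pipe.score(X_test, y_test)
    print(f"{name}: {results[name]:.3f}")
```

Both variants should score similarly: PCA mainly buys training speed, while scaling is what the network actually depends on.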


