MLPClassifier
Let’s analyze this question carefully.
Problem

You’re training an MLPClassifier (a neural network) on MNIST.

- MNIST images are already grayscale (28×28, flattened to 784 features).
- Neural networks are very sensitive to feature scaling.
Option Analysis

- Convert images to grayscale.
  ❌ Not needed: MNIST is already grayscale.
- Scale the data using Min-Max Scaling.
  ✅ Correct: neural networks like MLP work best when inputs are normalized (e.g., into [0, 1], or to zero mean and unit variance). Scaling is essential for fast convergence and good performance.
- Apply PCA to reduce feature dimensions.
  ⚠️ Not essential: PCA can speed up training, but it is not required for accuracy; 784 features are manageable.
- One-hot encode the target labels (digits 0–9).
  ⚠️ Not essential: Scikit-Learn’s MLPClassifier accepts integer class labels directly, so one-hot encoding is unnecessary.
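To back up the last point, here is a minimal sketch (using random data of MNIST’s shape as a stand-in, not the real dataset) showing that Scikit-Learn’s MLPClassifier trains directly on a plain integer label vector:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Tiny synthetic stand-in for MNIST: 100 samples, 784 features, labels 0-9.
rng = np.random.default_rng(0)
X = rng.random((100, 784))
y = rng.integers(0, 10, size=100)  # plain integer class labels, no one-hot needed

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=50, random_state=0)
clf.fit(X, y)  # accepts the integer vector y directly

print(clf.classes_)       # the class labels MLPClassifier inferred from y
print(clf.predict(X[:5])) # predictions come back as integer labels, too
```

MLPClassifier one-hot encodes the labels internally, which is why you never have to do it yourself.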
✅ Correct Answer:
Scaling the data using Min-Max Scaling
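As a sketch of the correct answer in practice, the following puts MinMaxScaler and MLPClassifier in one pipeline. It uses Scikit-Learn’s built-in 8×8 digits dataset as a small stand-in for MNIST; for real MNIST pixels in [0, 255], dividing by 255 achieves the same [0, 1] scaling:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# load_digits is a small stand-in for MNIST; the scaling step is the point here.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# MinMaxScaler maps each pixel feature into [0, 1] before it reaches the MLP,
# so the network sees well-conditioned inputs and converges faster.
model = make_pipeline(MinMaxScaler(), MLPClassifier(max_iter=300, random_state=0))
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```

Fitting the scaler inside the pipeline also ensures its min/max statistics come from the training split only, so no information leaks from the test set.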