MLPRegressor on the California housing dataset.

Let’s carefully evaluate the options for training scikit-learn’s MLPRegressor on the California housing dataset.


Options Analysis

  1. “Increasing the number of hidden layers always improves regression accuracy.”
    ❌ Incorrect.

  • Deeper networks can overfit, train more slowly, and are harder to optimize, so extra layers give no guaranteed accuracy gain (see the sketch below).

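As a minimal sketch (synthetic data and arbitrary layer sizes, so exact scores will vary), cross-validating a shallow and a deeper MLPRegressor shows that depth alone brings no automatic improvement:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Synthetic regression data; the layer sizes below are arbitrary choices.
X, y = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)

for layers in [(32,), (32, 32, 32, 32)]:
    mlp = MLPRegressor(hidden_layer_sizes=layers, max_iter=1000, random_state=0)
    scores = cross_val_score(mlp, X, y, cv=3, scoring="r2")
    print(layers, "mean R^2:", round(scores.mean(), 3))
```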

  2. “Using the ReLU activation function in hidden layers is a good choice for MLP regression.”
    ✅ Correct.

  • ReLU is a common, effective choice for hidden layers.

  • It helps avoid vanishing gradient problems and speeds up convergence.

  • In MLPRegressor, the default activation is 'relu' (see the snippet below).
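For example (the hidden layer sizes here are arbitrary), the activation can be set explicitly, even though 'relu' is already the default:

```python
from sklearn.neural_network import MLPRegressor

# 'relu' is the default for hidden layers, but it can be set explicitly.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                   max_iter=500, random_state=0)
print(mlp.activation)  # 'relu'
```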


  3. “The output layer should use a softmax activation to predict continuous house prices.”
    ❌ Incorrect.

  • Softmax is for classification problems (outputs probabilities over classes).

  • For regression, the output layer is typically linear (identity activation); a quick check follows below.
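As a minimal sketch on synthetic data: after fitting, MLPRegressor reports an identity output activation, not softmax:

```python
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

# The fitted output activation is 'identity', i.e. a linear output layer.
print(mlp.out_activation_)
```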


  4. “MLPRegressor does not require feature scaling since neural networks automatically normalize input data.”
    ❌ Incorrect.

  • Neural networks do not automatically normalize input data.

  • Feature scaling (e.g., StandardScaler or MinMaxScaler) is essential for stable training; see the pipeline sketch below.
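Here is a minimal sketch of the usual recipe on California housing (the layer size and max_iter are arbitrary): wrap the scaler and the regressor in a Pipeline so the scaler is fit on the training split only:

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fitting the scaler inside the pipeline uses training statistics only,
# which also avoids leaking test-set information.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("Test R^2:", model.score(X_test, y_test))
```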


Correct Answer

Option 2: Using the ReLU activation function in hidden layers is a good choice for MLP regression.

