Commit f85bf28

Add MLP activation function comparison script
Added a new script (mlp_activation_comparison.py) that demonstrates the effect of different activation functions ('relu', 'tanh', 'logistic') on the make_moons dataset using scikit-learn's MLPClassifier. The script trains one network per activation with the architecture and training settings held fixed, then plots test accuracy, making it easy to see how the activation choice influences model performance.
1 parent beb3cfd commit f85bf28

1 file changed: 23 additions & 0 deletions

mlp_activation_comparison.py
@@ -0,0 +1,23 @@
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Two interleaved half-moons: a small nonlinear binary classification problem.
X, y = make_moons(n_samples=500, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

activations = ["logistic", "tanh", "relu"]
results = {}

# Train one network per activation function; everything else is held fixed.
for act in activations:
    clf = MLPClassifier(hidden_layer_sizes=(10, 5), activation=act, solver="adam",
                        max_iter=1000, random_state=42)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    acc = accuracy_score(y_test, y_pred)
    results[act] = acc

# Bar chart of held-out accuracy per activation.
plt.bar(list(results.keys()), list(results.values()))
plt.title("Activation Function Comparison")
plt.ylabel("Accuracy")
plt.show()
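
A minimal sketch, not part of the committed script: if the comparison is run in a headless environment, or a textual readout is wanted alongside the chart, the script's results dict can be printed and the figure saved with standard matplotlib calls before or instead of plt.show().

# Hypothetical additions reusing the results dict from the script above.
for act, acc in results.items():
    print(f"{act}: accuracy = {acc:.3f}")  # per-activation test accuracy
plt.savefig("activation_comparison.png", dpi=150)  # persist the bar chart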
