Min, Max, Argmin & Argmax: Finding Extremes in Data
In NumPy, these functions help us find:
- `min`/`max` → the smallest or largest value in an array.
- `argmin`/`argmax` → the index of the smallest or largest value.
These are super useful in Deep Learning (DL) for:
- Finding the best prediction in classification models.
- Identifying extreme values in loss functions.
- Choosing optimal weights in optimization algorithms.
🔹 NumPy Examples
1️⃣ min & max

```python
import numpy as np

arr = np.array([3, 1, 7, 0, 5])
print("Min:", np.min(arr))  # 0
print("Max:", np.max(arr))  # 7
```
👉 Finds the smallest and largest values.
2️⃣ argmin & argmax (Find Index)

```python
print("Index of Min:", np.argmin(arr))  # 3 (index of the value 0)
print("Index of Max:", np.argmax(arr))  # 2 (index of the value 7)
```
👉 Returns the index, not the value itself!
3️⃣ min & max on Multi-Dimensional Arrays

```python
arr2D = np.array([[3, 7, 2],
                  [5, 1, 8]])

print("Min (Overall):", np.min(arr2D))  # 1
print("Max (Overall):", np.max(arr2D))  # 8

# Find min/max along each axis
print("Min per Column:", np.min(arr2D, axis=0))  # [3 1 2]
print("Max per Row:", np.max(arr2D, axis=1))     # [7 8]
```
👉 `axis=0` → column-wise (one result per column)
👉 `axis=1` → row-wise (one result per row)
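The same `axis` argument works for `argmin` and `argmax`. As a small sketch using the same `arr2D` as above:

```python
import numpy as np

arr2D = np.array([[3, 7, 2],
                  [5, 1, 8]])

# Row index of the minimum in each column (axis=0),
# and column index of the maximum in each row (axis=1)
print("Argmin per Column:", np.argmin(arr2D, axis=0))  # [0 1 0]
print("Argmax per Row:", np.argmax(arr2D, axis=1))     # [1 2]
```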
🔹 Why are These Important in Deep Learning?
1️⃣ Finding the Best Prediction in Classification
In multi-class classification, the model outputs a probability for each class. We use `argmax` to find the class with the highest probability.

```python
softmax_outputs = np.array([[0.1, 0.3, 0.6],
                            [0.7, 0.2, 0.1]])

predictions = np.argmax(softmax_outputs, axis=1)
print("Predicted Classes:", predictions)  # [2 0]
```
✅ `argmax` helps pick the most probable class.
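A common next step is to compare `argmax` predictions against the true labels to compute accuracy. This is a minimal sketch; the probabilities and labels below are made-up illustrative values:

```python
import numpy as np

# Hypothetical softmax outputs (one row per sample) and true class labels
softmax_outputs = np.array([[0.1, 0.3, 0.6],
                            [0.7, 0.2, 0.1],
                            [0.2, 0.5, 0.3]])
true_labels = np.array([2, 0, 2])

predictions = np.argmax(softmax_outputs, axis=1)  # [2 0 1]
accuracy = np.mean(predictions == true_labels)    # 2 of 3 correct
print("Accuracy:", accuracy)
```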
2️⃣ Finding the Best and Worst Loss Values
When training deep learning models, we track loss values. `min` and `argmin` help us find the best (lowest) loss and the epoch it occurred in.

```python
loss_values = np.array([0.9, 0.5, 0.2, 0.6])

best_loss = np.min(loss_values)      # smallest loss
best_epoch = np.argmin(loss_values)  # epoch with the lowest loss
print("Best Loss:", best_loss)    # 0.2
print("Best Epoch:", best_epoch)  # 2
```
✅ Useful for early stopping and model evaluation.
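To make the early-stopping connection concrete, here is a minimal sketch: stop training once the best loss has not improved for a chosen number of epochs. The loss values and the `patience` setting are made-up assumptions for illustration:

```python
import numpy as np

# Hypothetical per-epoch validation losses
loss_values = np.array([0.9, 0.5, 0.2, 0.6, 0.7])
patience = 2  # stop after this many epochs without improvement

best_epoch = np.argmin(loss_values)

stopped_epoch = None
for epoch in range(len(loss_values)):
    # Distance from the best epoch seen so far
    if epoch - np.argmin(loss_values[:epoch + 1]) >= patience:
        stopped_epoch = epoch
        break

print("Best epoch:", best_epoch)           # 2
print("Stopped at epoch:", stopped_epoch)  # 4
```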
3️⃣ Finding Maximum Activation in a Neural Network
In a convolutional neural network (CNN), we apply a ReLU activation and then look for the maximum activation in a feature map. Note that `argmax` on a 2-D array returns an index into the flattened array.

```python
feature_map = np.array([[0.1, 0.5, 0.2],
                        [0.7, 0.3, 0.9]])

max_activation = np.max(feature_map)   # 0.9
max_location = np.argmax(feature_map)  # 5 (index in the flattened array)
print("Max Activation:", max_activation)
print("Max Activation Index:", max_location)
```
✅ Helps find the most activated neuron.
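Because `argmax` flattens the array by default, `np.unravel_index` can convert the flat index back into (row, column) coordinates. A short sketch using the same feature map:

```python
import numpy as np

feature_map = np.array([[0.1, 0.5, 0.2],
                        [0.7, 0.3, 0.9]])

flat_index = np.argmax(feature_map)  # 5
# Convert the flat index into 2-D coordinates
row, col = np.unravel_index(flat_index, feature_map.shape)
print("Max at row", row, "col", col)  # row 1, col 2
```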
Conclusion 🚀
| Function | What It Does | Example Use in Deep Learning |
|---|---|---|
| `min` | Finds the smallest value | Identify the lowest loss |
| `max` | Finds the largest value | Find the maximum activation |
| `argmin` | Finds the index of the smallest value | Find the best epoch in training |
| `argmax` | Finds the index of the largest value | Pick the most probable class in classification |