Deep Learning Module
Neural network architecture design, multi-framework support, and advanced training visualization with real-time monitoring.
Overview
The Deep Learning module provides AI-powered neural network architecture design, multi-framework support (TensorFlow, PyTorch), and advanced training visualization. It supports CNN, RNN, LSTM, and custom architectures.
NeuralNetworkDesigner
class NeuralNetworkDesigner()
AI-powered neural network architecture design.
Key Features
- AI-powered architecture recommendations
- Multi-layer network design
- CNN, RNN, LSTM support
- Complexity analysis
- Code generation
Methods
get_ai_recommendations(input_shape: Tuple, problem_type: str, num_classes: Optional[int] = None) -> Dict[str, Any]
Get AI-powered architecture recommendations.
Parameters:
- input_shape: Input shape tuple
- problem_type: 'classification', 'regression', or 'clustering'
- num_classes: Number of classes (for classification)
Returns: Dictionary containing architecture recommendations
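The exact keys of the returned dictionary are not pinned down on this page; as a sketch, assuming it carries a 'layers' list of layer-configuration dicts (the form consumed by create_network and used in the examples below) plus an explanatory field:

```python
# Hypothetical structure of a recommendations dict; the real keys
# depend on the framework version, but the examples on this page
# assume a 'layers' list of layer-configuration dicts.
recommendations = {
    'layers': [
        {'type': 'Dense', 'units': 64, 'activation': 'relu'},
        {'type': 'Dropout', 'rate': 0.3},
        {'type': 'Dense', 'units': 2, 'activation': 'softmax'},
    ],
    'rationale': 'Shallow dense stack for low-dimensional tabular input',
}

# Each layer dict names its type plus type-specific settings
for layer in recommendations['layers']:
    print(f"{layer['type']}: {layer.get('units', layer.get('rate', ''))}")
```

Because the recommendation is plain data, it can be edited by hand before being passed on to create_network.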
create_network(input_shape: Tuple, layers: List[Dict[str, Any]], problem_type: str, num_classes: Optional[int] = None) -> Any
Create neural network from layer configuration.
Parameters:
- input_shape: Input shape tuple
- layers: List of layer configurations
- problem_type: Problem type
- num_classes: Number of classes
Returns: Compiled neural network model
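The layer configurations passed to create_network are plain Python dicts, so they can be inspected before the model is built. As an illustration of the complexity-analysis idea, this hypothetical helper (not part of the framework) estimates trainable-parameter counts for a purely Dense stack:

```python
# Hypothetical helper, not part of ai_ml_framework: estimate the
# trainable parameters of a Dense-only stack from its layer configs.
def count_dense_params(input_shape, layers):
    total = 0
    fan_in = input_shape[0]
    for cfg in layers:
        if cfg['type'] == 'Dense':
            units = cfg['units']
            total += fan_in * units + units  # weight matrix + biases
            fan_in = units
    return total

layers = [
    {'type': 'Dense', 'units': 64, 'activation': 'relu'},
    {'type': 'Dropout', 'rate': 0.3},
    {'type': 'Dense', 'units': 2, 'activation': 'softmax'},
]
print(count_dense_params((20,), layers))  # 20*64+64 + 64*2+2 = 1474
```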
DeepLearningTrainer
class DeepLearningTrainer(model: Any, problem_type: str, framework: str = 'tensorflow')
Multi-framework deep learning trainer.
Key Features
- TensorFlow and PyTorch support
- Training callbacks
- Real-time monitoring
- History tracking
- Performance evaluation
Methods
train(X: np.ndarray, y: np.ndarray, validation_data: Optional[Tuple] = None, epochs: int = 100, batch_size: int = 32) -> Dict[str, List]
Train the model.
Parameters:
- X: Training features
- y: Training targets
- validation_data: Validation data tuple
- epochs: Number of epochs
- batch_size: Batch size
Returns: Training history dictionary
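The history layout is not specified here; assuming Keras-style per-metric lists with one entry per epoch (the examples below index history['accuracy'][-1] in exactly that way), picking the best epoch is a one-liner:

```python
# Assumed history layout: one list per metric, one entry per epoch.
# The actual keys depend on problem_type and the chosen framework.
history = {
    'loss':     [0.69, 0.52, 0.41, 0.38],
    'accuracy': [0.55, 0.71, 0.80, 0.83],
    'val_loss': [0.66, 0.50, 0.44, 0.45],
}

# Epoch with the lowest validation loss (0-indexed)
best_epoch = min(range(len(history['val_loss'])), key=history['val_loss'].__getitem__)
print(best_epoch)  # 2
```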
evaluate(X: np.ndarray, y: np.ndarray) -> Dict[str, float]
Evaluate model performance.
Parameters:
- X: Test features
- y: Test targets
Returns: Evaluation metrics
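A sketch of working with the returned metrics, assuming a flat name-to-float mapping (the actual keys depend on problem_type, e.g. 'mse'/'mae' for regression):

```python
# Hypothetical metrics dict; the real keys depend on problem_type.
metrics = {'loss': 0.41, 'accuracy': 0.83, 'precision': 0.81, 'recall': 0.85}

for name, value in sorted(metrics.items()):
    print(f"{name:>10}: {value:.3f}")

# Derived metric: F1 from precision and recall
f1 = 2 * metrics['precision'] * metrics['recall'] / (metrics['precision'] + metrics['recall'])
print(f"        f1: {f1:.3f}")  # 0.830
```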
NetworkVisualizer
class NetworkVisualizer()
Neural network visualization tools.
Key Features
- Architecture visualization
- Training progress plots
- Feature importance
- Model interpretability
- Interactive dashboards
Methods
plot_model_architecture(model: Any, title: Optional[str] = None, save_path: Optional[str] = None)
Plot model architecture.
Parameters:
- model: Neural network model
- title: Plot title
- save_path: Path to save plot
plot_training_history(history: Dict[str, List], title: Optional[str] = None, save_path: Optional[str] = None)
Plot training history curves.
Parameters:
- history: Training history
- title: Plot title
- save_path: Path to save plot
Examples
Classification Network
```python
from ai_ml_framework.deep_learning import NeuralNetworkDesigner, DeepLearningTrainer

# Get AI recommendations
designer = NeuralNetworkDesigner()
recommendations = designer.get_ai_recommendations(
    input_shape=(20,),
    problem_type='classification',
    num_classes=2
)

print("Recommended architecture:")
for layer in recommendations['layers']:
    print(f"  {layer['type']}: {layer.get('units', layer.get('filters', 'N/A'))}")

# Create model
model = designer.create_network(
    input_shape=(20,),
    layers=recommendations['layers'],
    problem_type='classification',
    num_classes=2
)

# Train model
trainer = DeepLearningTrainer(model, problem_type='classification')
trainer.compile_model(optimizer='adam', loss='binary_crossentropy')
history = trainer.train(X_train, y_train, validation_data=(X_test, y_test))
print(f"Final accuracy: {history['accuracy'][-1]:.3f}")
```
CNN for Image Classification
```python
# CNN architecture for images
cnn_layers = [
    {'type': 'Conv2D', 'filters': 32, 'kernel_size': (3, 3), 'activation': 'relu'},
    {'type': 'MaxPooling2D', 'pool_size': (2, 2)},
    {'type': 'Conv2D', 'filters': 64, 'kernel_size': (3, 3), 'activation': 'relu'},
    {'type': 'MaxPooling2D', 'pool_size': (2, 2)},
    {'type': 'Flatten'},
    {'type': 'Dense', 'units': 128, 'activation': 'relu'},
    {'type': 'Dropout', 'rate': 0.5},
    {'type': 'Dense', 'units': 10, 'activation': 'softmax'}
]

cnn_model = designer.create_network(
    input_shape=(28, 28, 1),
    layers=cnn_layers,
    problem_type='classification',
    num_classes=10
)

# Train CNN
trainer = DeepLearningTrainer(cnn_model, problem_type='classification')
trainer.compile_model(optimizer='adam', loss='categorical_crossentropy')
cnn_history = trainer.train(X_train, y_train, validation_data=(X_test, y_test))
print(f"CNN trained with {len(cnn_model.layers)} layers")
```
LSTM for Sequential Data
```python
# LSTM architecture for sequences
lstm_layers = [
    {'type': 'LSTM', 'units': 50, 'return_sequences': True},
    {'type': 'Dropout', 'rate': 0.2},
    {'type': 'LSTM', 'units': 50},
    {'type': 'Dropout', 'rate': 0.2},
    {'type': 'Dense', 'units': 1, 'activation': 'sigmoid'}
]

lstm_model = designer.create_network(
    input_shape=(10, 1),  # 10 timesteps, 1 feature
    layers=lstm_layers,
    problem_type='regression'
)

# Train LSTM
trainer = DeepLearningTrainer(lstm_model, problem_type='regression')
trainer.compile_model(optimizer='adam', loss='mse')
lstm_history = trainer.train(X_train, y_train, validation_data=(X_test, y_test))
print(f"LSTM final loss: {lstm_history['loss'][-1]:.4f}")
```
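The LSTM above expects input of shape (samples, 10, 1). One common way to produce that shape from a 1-D series is a sliding window; this NumPy sketch is illustrative only and not part of ai_ml_framework:

```python
import numpy as np

# Turn a 1-D series into overlapping windows of shape (n, timesteps, 1),
# matching the input_shape=(10, 1) used above; each target is the value
# that follows its window.
def make_windows(series, timesteps=10):
    series = np.asarray(series, dtype=float)
    n = len(series) - timesteps
    X = np.stack([series[i:i + timesteps] for i in range(n)])
    y = series[timesteps:]
    return X[..., np.newaxis], y

X_train, y_train = make_windows(np.arange(100))
print(X_train.shape, y_train.shape)  # (90, 10, 1) (90,)
```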
Visualization
```python
from ai_ml_framework.deep_learning import NetworkVisualizer

# Create visualizer
visualizer = NetworkVisualizer()

# Plot architecture
visualizer.plot_model_architecture(
    model,
    title="Neural Network Architecture",
    save_path="architecture.png"
)

# Plot training history
visualizer.plot_training_history(
    history,
    title="Training Progress",
    save_path="training_history.png"
)

# Create training dashboard
visualizer.create_training_dashboard(
    history,
    save_path="training_dashboard.html"
)

# Plot feature importance (for interpretability)
visualizer.plot_feature_importance(
    model,
    feature_names=feature_names,
    save_path="feature_importance.png"
)
```