API Reference
Comprehensive API documentation for all modules and functions in the AI/ML Framework.
Preprocessing Module
DataAnalyzer
class DataAnalyzer()
Comprehensive dataset analysis and quality assessment.
Methods
analyze_dataset(df: pd.DataFrame, target_column: str = None) -> Dict[str, Any]
Analyze dataset characteristics and quality.
df: Input DataFrame
target_column: Target column name (optional)
get_data_quality_score(df: pd.DataFrame) -> float
Calculate data quality score (0-100).
df: Input DataFrame
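The framework's import path and scoring internals are not shown in this reference. As an illustration of the kind of 0-100 score `get_data_quality_score` returns, here is a hypothetical sketch that penalizes missing cells and duplicate rows (the weighting is invented, not the framework's actual formula):

```python
import pandas as pd


def data_quality_score(df: pd.DataFrame) -> float:
    """Toy 0-100 quality score: penalize missing cells and duplicate rows."""
    total_cells = df.shape[0] * df.shape[1]
    missing_ratio = df.isna().sum().sum() / total_cells if total_cells else 0.0
    duplicate_ratio = df.duplicated().mean() if len(df) else 0.0
    return round(100 * (1 - missing_ratio) * (1 - duplicate_ratio), 2)


df = pd.DataFrame({"a": [1, 2, None, 4], "b": ["x", "y", "y", None]})
score = data_quality_score(df)  # 2 of 8 cells missing, no duplicate rows
```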
AutoPreprocessor
class AutoPreprocessor(target_column: str, config: Dict[str, Any] = None)
Automated data preprocessing with AI recommendations.
Methods
fit_transform(df: pd.DataFrame) -> Tuple[pd.DataFrame, pd.Series]
Fit preprocessor and transform data.
df: Input DataFrame
get_preprocessing_steps() -> List[str]
Get list of preprocessing steps applied.
AutoML Module
AutoMLSelector
class AutoMLSelector(problem_type: str, models: List[str] = None)
Intelligent model selection and training.
Methods
auto_select_and_train(X: pd.DataFrame, y: pd.Series) -> Any
Automatically select and train the best model.
X: Feature matrix
y: Target vector
create_ensemble(X: pd.DataFrame, y: pd.Series, method: str = 'voting') -> Any
Create ensemble model.
X: Feature matrix
y: Target vector
method: Ensemble method ('voting', 'stacking', 'blending')
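The selection logic behind `auto_select_and_train` is not specified; one plausible reading is a cross-validation bake-off over candidate estimators, sketched here with scikit-learn (the candidate list and 5-fold scoring are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=50, random_state=0),
}
# Score each candidate with 5-fold CV, then refit the winner on all data.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in candidates.items()}
best_name = max(scores, key=scores.get)
best_model = candidates[best_name].fit(X, y)
```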
HyperparameterOptimizer
class HyperparameterOptimizer(problem_type: str)
Advanced hyperparameter optimization using Optuna.
Methods
optimize_model(model: Any, X: pd.DataFrame, y: pd.Series, n_trials: int = 100) -> Tuple[Any, Dict[str, Any]]
Optimize model hyperparameters.
model: Base model to optimize
X: Feature matrix
y: Target vector
n_trials: Number of optimization trials
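Per the class description the framework drives this search with Optuna. As a dependency-light illustration of the same trial loop, here is an equivalent with scikit-learn's `RandomizedSearchCV` standing in (the parameter grid and `n_iter`, analogous to `n_trials`, are invented for the example):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=150, random_state=0)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": [10, 25, 50],
                         "max_depth": [2, 4, 6]},
    n_iter=5,   # analogous to n_trials
    cv=3,
    random_state=0,
)
search.fit(X, y)
# Mirrors the documented return: (best model, best hyperparameters).
best_model, best_params = search.best_estimator_, search.best_params_
```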
Deep Learning Module
NeuralNetworkDesigner
class NeuralNetworkDesigner()
AI-powered neural network architecture design.
Methods
get_ai_recommendations(input_shape: Tuple, problem_type: str, num_classes: int = None) -> Dict[str, Any]
Get AI-powered architecture recommendations.
input_shape: Input shape tuple
problem_type: 'classification', 'regression', 'clustering'
num_classes: Number of classes (for classification)
create_network(input_shape: Tuple, layers: List[Dict[str, Any]], problem_type: str, num_classes: int = None) -> Any
Create neural network from layer configuration.
input_shape: Input shape tuple
layers: List of layer configurations
problem_type: Problem type
num_classes: Number of classes
DeepLearningTrainer
class DeepLearningTrainer(model: Any, problem_type: str, framework: str = 'tensorflow')
Multi-framework deep learning trainer.
Methods
train(X: np.ndarray, y: np.ndarray, validation_data: Tuple = None, epochs: int = 100, batch_size: int = 32) -> Dict[str, List]
Train the model.
X: Training features
y: Training targets
validation_data: Validation data tuple
epochs: Number of epochs
batch_size: Batch size
Pipeline Module
PipelineCreator
class PipelineCreator()
Automated scikit-learn pipeline creation.
Methods
create_auto_pipeline(df: pd.DataFrame, target_column: str, model: Any = None) -> Pipeline
Create automated pipeline.
df: Input DataFrame
target_column: Target column name
model: Custom model (optional)
create_custom_pipeline(df: pd.DataFrame, target_column: str, config: Dict[str, Any]) -> Pipeline
Create custom pipeline with configuration.
df: Input DataFrame
target_column: Target column name
config: Pipeline configuration
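Since `create_auto_pipeline` returns a scikit-learn `Pipeline`, the result can be fit, scored, and serialized like any estimator. A hand-rolled equivalent of what such a pipeline might contain (the imputation/scaling steps and default model are assumptions):

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({"x1": [1.0, None, 3.0, 4.0, 5.0, 6.0],
                   "x2": [2.0, 1.0, 0.0, 3.0, 5.0, 4.0],
                   "label": [0, 0, 0, 1, 1, 1]})
X, y = df.drop(columns="label"), df["label"]

# Preprocessing and model chained so fit/predict run end to end.
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),
    ("scale", StandardScaler()),
    ("model", LogisticRegression()),
])
pipe.fit(X, y)
preds = pipe.predict(X)
```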
PipelineManager
class PipelineManager(workspace_dir: str = "ai_ml_workspace")
Advanced pipeline management with versioning.
Methods
register_pipeline(pipeline_id: str, name: str, config: Dict[str, Any], pipeline_object: Any, metrics: Dict[str, float] = None) -> str
Register pipeline in the system.
pipeline_id: Unique pipeline identifier
name: Pipeline name
config: Pipeline configuration
pipeline_object: Pipeline object
metrics: Performance metrics
API Module
APIGenerator
class APIGenerator(model_path: str, model_id: str = None)
Automatic REST API generation for ML models.
Methods
generate_api(title: str = "ML Model API", description: str = "Auto-generated API", version: str = "1.0.0") -> FastAPI
Generate FastAPI application.
title: API title
description: API description
version: API version
add_authentication(api_key: str)
Add API key authentication.
api_key: API key for authentication
generate_main_script(output_path: str = "main.py")
Generate main.py script for running API.
output_path: Output file path
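The contents of the script `generate_main_script` emits are not shown; a generated launcher for a FastAPI app conventionally boils down to a `uvicorn.run` entry point, so a minimal sketch of such a generator might look like this (the `api` module name in the template is hypothetical):

```python
from pathlib import Path

MAIN_TEMPLATE = '''\
import uvicorn
from api import app  # hypothetical module exposing the generated FastAPI app

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
'''


def generate_main_script(output_path: str = "main.py") -> str:
    """Write a minimal launcher script and return its path."""
    Path(output_path).write_text(MAIN_TEMPLATE)
    return output_path


path = generate_main_script("main_example.py")
```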
Visualization Module
MLVisualizer
class MLVisualizer(style: str = 'seaborn-v0_8', figsize: Tuple[int, int] = (12, 8))
Comprehensive ML visualization toolkit.
Methods
plot_data_overview(df: pd.DataFrame, save_path: str = None)
Create comprehensive data overview visualization.
df: Input DataFrame
save_path: Path to save plot
plot_model_comparison(model_results: Dict[str, Dict[str, float]], metric: str = 'accuracy', save_path: str = None)
Compare the performance of multiple models.
model_results: Model results dictionary
metric: Metric to compare
save_path: Path to save plot
DashboardBuilder
class DashboardBuilder(title: str = "ML Dashboard", layout: str = "wide")
Interactive dashboard builder for ML experiments.
Methods
create_data_exploration_dashboard(df: pd.DataFrame)
Create data exploration dashboard.
df: Input DataFrame
run_dashboard(dashboard_type: str = "experiment", **kwargs)
Run the dashboard.
dashboard_type: Type of dashboard
**kwargs: Additional arguments
Utils Module
AIRecommendationsEngine
class AIRecommendationsEngine()
AI-powered recommendations engine.
Methods
generate_comprehensive_report(df: pd.DataFrame, target_column: str = None, current_model: str = None, current_performance: Dict[str, float] = None) -> RecommendationReport
Generate comprehensive recommendation report.
df: Input DataFrame
target_column: Target column name
current_model: Current model type
current_performance: Current model performance
Experiments Module
ExperimentTracker
class ExperimentTracker(tracking_uri: str = None, experiment_name: str = None, backend: str = "mlflow")
Comprehensive experiment tracking system.
Methods
start_run(run_name: str = None, tags: Dict[str, str] = None) -> str
Start new experiment run.
run_name: Run name
tags: Run tags
log_params(params: Dict[str, Any])
Log parameters to current run.
params: Parameters dictionary
log_metrics(metrics: Dict[str, float], step: int = None)
Log metrics to current run.
metrics: Metrics dictionary
step: Step number
log_model(model: Any, model_name: str, framework: str = "sklearn")
Log model to current run.
model: Model object
model_name: Model name
framework: Model framework
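With the default MLflow backend, the lifecycle above maps directly onto MLflow's `start_run` / `log_params` / `log_metrics`. A dependency-free mock of that lifecycle, purely to illustrate how the calls relate to each other (none of this is the framework's actual implementation):

```python
import uuid


class InMemoryTracker:
    """Minimal stand-in for an MLflow-style experiment tracker."""

    def __init__(self):
        self.runs = {}
        self.active = None

    def start_run(self, run_name=None, tags=None):
        run_id = uuid.uuid4().hex
        self.runs[run_id] = {"name": run_name, "tags": tags or {},
                             "params": {}, "metrics": []}
        self.active = run_id
        return run_id

    def log_params(self, params):
        self.runs[self.active]["params"].update(params)

    def log_metrics(self, metrics, step=None):
        self.runs[self.active]["metrics"].append({"step": step, **metrics})


tracker = InMemoryTracker()
run_id = tracker.start_run("baseline", tags={"stage": "dev"})
tracker.log_params({"lr": 0.01})
tracker.log_metrics({"accuracy": 0.91}, step=1)
```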
VersionManager
class VersionManager(storage_path: str = "version_storage")
Advanced version management for ML artifacts.
Methods
save_artifact(artifact_path: str, name: str, version: VersionInfo, artifact_type: str, creator: str = "user", metadata: Dict[str, Any] = None, dependencies: List[str] = None, tags: List[str] = None) -> str
Save artifact with versioning.
artifact_path: Path to artifact
name: Artifact name
version: Version information
artifact_type: Artifact type
creator: Creator name
metadata: Additional metadata
dependencies: Dependency artifacts
tags: Artifact tags
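How `save_artifact` derives its returned identifier is not documented; a common pattern is content-addressed storage, sketched below with a plain string in place of the `VersionInfo` type and an invented ID scheme (both simplifications for illustration):

```python
import hashlib
import json
from pathlib import Path


def save_artifact(artifact_path: str, name: str, version: str,
                  storage_dir: str = "version_storage") -> str:
    """Store an artifact under a content-hash key and record its metadata."""
    data = Path(artifact_path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()[:12]
    artifact_id = f"{name}-{version}-{digest}"
    store = Path(storage_dir)
    store.mkdir(exist_ok=True)
    (store / artifact_id).write_bytes(data)
    (store / f"{artifact_id}.json").write_text(
        json.dumps({"name": name, "version": version, "sha256_prefix": digest}))
    return artifact_id


Path("model.bin").write_bytes(b"fake model weights")
artifact_id = save_artifact("model.bin", "demo-model", "1.0.0")
```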
Quick Functions
quick_preprocess(df: pd.DataFrame, target_column: str) -> Tuple[pd.DataFrame, pd.Series]
Quick preprocessing for common use cases.
df: Input DataFrame
target_column: Target column name
quick_automl(X: pd.DataFrame, y: pd.Series, problem_type: str = 'classification') -> Any
Quick AutoML for common use cases.
X: Feature matrix
y: Target vector
problem_type: Problem type
quick_pipeline(df: pd.DataFrame, target_column: str, model: Any = None) -> Pipeline
Quick pipeline creation.
df: Input DataFrame
target_column: Target column name
model: Custom model
quick_api(model_path: str, model_id: str = "model") -> APIGenerator
Quick API generation.
model_path: Path to saved model
model_id: Model identifier
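The quick functions share one shape: take a DataFrame, return ready-to-model objects. A toy stand-in for `quick_preprocess` shows the `(X, y)` contract with pandas (the cleanup steps here are invented for illustration, not the function's real behavior):

```python
import pandas as pd


def quick_preprocess(df: pd.DataFrame, target_column: str):
    """Toy version: drop fully-empty columns, fill the rest, split X/y."""
    df = df.dropna(axis=1, how="all")
    X = df.drop(columns=target_column)
    X = X.fillna(X.median(numeric_only=True))
    y = df[target_column]
    return X, y


df = pd.DataFrame({"f1": [1.0, None, 3.0],
                   "empty": [None, None, None],
                   "y": [0, 1, 0]})
X, y = quick_preprocess(df, "y")
```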