
Hyperparameter Optimization Techniques

  • The process of finding the optimal hyperparameters for a machine learning algorithm is called hyperparameter optimization. 
  • Common algorithms include: 
    • Grid Search 
    • Random Search

Grid search 

  • Grid search is a traditional technique for tuning hyperparameters. It exhaustively tries every combination of the specified hyperparameter values, and a validation technique (such as cross-validation) ensures the trained model captures most of the patterns in the dataset. 
  • Grid search is a simple algorithm to use, but it suffers when the search space is high-dimensional (the curse of dimensionality), because the number of combinations grows multiplicatively with each added hyperparameter. This is significant, as the performance of the entire model depends on the hyperparameter values specified. 
  • Python implementation of GridSearchCV using Sklearn for the KNN algorithm:

from sklearn.model_selection import GridSearchCV
from sklearn import datasets
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score,confusion_matrix,classification_report
from pprint import pprint

#Load a sample dataset (the original post does not specify one; the Iris
#dataset is assumed here) and split it into train and test sets
iris = datasets.load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)


#List Hyperparameters that we want to tune.
leaf_size = list(range(1,50))
n_neighbors = list(range(1,30))
p=[1,2]

#Convert to dictionary
hyperparameters = dict(leaf_size=leaf_size, n_neighbors=n_neighbors, p=p)


#Create new KNN object
GridSearchKNNClassifier = KNeighborsClassifier()

#Use GridSearch
clf = GridSearchCV(GridSearchKNNClassifier, hyperparameters, cv=10)
#Fit the model
best_model = clf.fit(X_train,y_train)
#Print The value of best Hyperparameters
print('Best leaf_size:', best_model.best_estimator_.get_params()['leaf_size'])
print('Best p:', best_model.best_estimator_.get_params()['p'])
print('Best n_neighbors:', best_model.best_estimator_.get_params()['n_neighbors'])
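To see why grid search scales poorly, we can count the model fits the grid above requires. This is a small arithmetic sketch using the same value ranges as the code:

```python
# Number of values for each hyperparameter in the grid above
leaf_size_count = len(list(range(1, 50)))   # 49 values
n_neighbors_count = len(list(range(1, 30))) # 29 values
p_count = len([1, 2])                       # 2 values

# Grid search evaluates every combination
combinations = leaf_size_count * n_neighbors_count * p_count
print(combinations)  # 2842 candidate settings

# With cv=10, one model is fit per combination per fold
total_fits = combinations * 10
print(total_fits)    # 28420 model fits
```

Adding just one more hyperparameter with, say, 10 values would multiply this to 284,200 fits, which is the curse of dimensionality in practice.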



Random search 

  • Random search is a technique where random combinations of the hyperparameter values are tried to find the best model. Instead of evaluating every combination, it samples random combinations from the specified ranges. 
  • To optimize with random search, the model is evaluated at a fixed number of randomly chosen configurations in the parameter space.


from sklearn.model_selection import RandomizedSearchCV
            
#List Hyperparameters that we want to tune.
leaf_size = list(range(1,50))
n_neighbors = list(range(1,30))
p=[1,2]

#Convert to dictionary
hyperparameters = dict(leaf_size=leaf_size, n_neighbors=n_neighbors, p=p)

# Applying Random Search

random_search_knn = RandomizedSearchCV(KNeighborsClassifier(), hyperparameters, cv=10, random_state=42)

random_search_knn.fit(X_train, y_train)

#Print The value of best Hyperparameters
print('Best leaf_size:', random_search_knn.best_estimator_.get_params()['leaf_size'])
print('Best p:', random_search_knn.best_estimator_.get_params()['p'])
print('Best n_neighbors:', random_search_knn.best_estimator_.get_params()['n_neighbors'])
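
By default, RandomizedSearchCV evaluates only n_iter=10 random configurations, far fewer than the 2,842 combinations grid search would try. A common variant (a sketch, not from the original post) draws values from scipy distributions instead of explicit lists, so the search is not limited to a pre-enumerated grid:

```python
from scipy.stats import randint
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Sample hyperparameter values from distributions instead of fixed lists
param_distributions = {
    'leaf_size': randint(1, 50),
    'n_neighbors': randint(1, 30),
    'p': [1, 2],
}

# n_iter controls how many random configurations are evaluated:
# 20 configurations x 10 folds = 200 fits, versus 28,420 for the full grid
random_search = RandomizedSearchCV(
    KNeighborsClassifier(),
    param_distributions,
    n_iter=20,
    cv=10,
    random_state=42,
)
```

After calling random_search.fit(X_train, y_train), the best configuration is available through random_search.best_params_, just as with GridSearchCV.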










Join the ML in Python channel on Telegram, where you can learn concepts in Python, Statistics, Data Visualization, Machine Learning, and Deep Learning.

  

Join the Aptitude Preparation channel on Telegram; this channel helps you crack any interview.



The Learn Data Science material covers concepts in Python, Statistics, Data Visualization, Machine Learning, and Deep Learning. It contains projects that help you understand the flow of building a model and the steps to take depending on the dataset, along with interview questions that help you crack interviews.






