Course Library

Browse our professional certification programs

CISSP

The CISSP curriculum is organized into eight domains that represent the core areas of information security and cybersecurity management. They cover everything from protecting data and managing risk to securing networks, systems, and software. The framework ensures organizations can safeguard their assets, maintain compliance, and respond effectively to threats.

120 Hours
View Details

The AI-Powered Stock Market Momentum Investor Course

The AI-Powered Stock Market Momentum Investor Course is a practical training program that teaches how to identify high-potential stocks using momentum investing strategies and AI-based research. It covers key concepts such as volume analysis, sector selection, technical indicators, and chart reading to help learners make informed investment decisions. The course focuses on building a systematic, repeatable approach to long-term wealth creation.
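To illustrate the kind of momentum concept the course covers, here is a minimal sketch of a price-momentum screen. The tickers, prices, and the `momentum` helper are all hypothetical, invented purely for demonstration; they are not material from the course itself.

```python
def momentum(prices, lookback=3):
    """Percent change in price over the last `lookback` periods."""
    if len(prices) <= lookback:
        raise ValueError("need more than `lookback` prices")
    return (prices[-1] - prices[-1 - lookback]) / prices[-1 - lookback] * 100

# Made-up closing prices for two fictional stocks
closes = {
    "STOCK_A": [100, 104, 109, 115, 121],
    "STOCK_B": [50, 50, 49, 51, 50],
}

# Rank tickers by momentum; a momentum investor favours the strongest movers
ranked = sorted(closes, key=lambda t: momentum(closes[t]), reverse=True)
for ticker in ranked:
    print(ticker, round(momentum(closes[ticker]), 2))
```

A real screen would combine a signal like this with the volume and sector filters the course describes.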

120 Hours
View Details

CCNP Enterprise

Detailed training curriculum available.

120 Hours
View Details

AWS DevOps

IPsolutions provides training courses for the AWS DevOps certification, covering the tools that automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that DevOps enables. Through hands-on learning, students acquire new, realistic AWS DevOps skills. The courses are designed to train students on Amazon Web Services and its related roles and features, and to show how DevOps combines software development (Dev) and operations (Ops).

40 hrs
View Details

Microsoft Azure

Discover how to optimize Azure workloads on Windows Server and explore Microsoft Azure's IaaS infrastructure, services, software, and portals. Gain knowledge of virtual network management and deployment in Azure, deploy websites and data services, manage Azure Content Delivery Networks, and build and maintain Azure Active Directory.

40 hrs
View Details

NLP

# DL-1: Performing matrix multiplication and finding eigenvectors and eigenvalues using TensorFlow

import tensorflow as tf

print("Matrix Multiplication Demo")
x = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
print(x)
y = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
print(y)
z = tf.matmul(x, y)
print("Product:", z)

e_matrix_A = tf.random.uniform([2, 2], minval=3, maxval=10, dtype=tf.float32, name="matrixA")
# tf.linalg.eigh assumes a self-adjoint (symmetric) matrix, so symmetrize first
e_matrix_A = 0.5 * (e_matrix_A + tf.transpose(e_matrix_A))
print("Matrix A:\n{}\n\n".format(e_matrix_A))
eigen_values_A, eigen_vectors_A = tf.linalg.eigh(e_matrix_A)
print("Eigen Vectors:\n{}\n\nEigen Values:\n{}\n".format(eigen_vectors_A, eigen_values_A))

***********    ***********    ***********

# DL-2: Solving the XOR problem using a deep feed-forward network

import numpy as np
from keras.layers import Dense
from keras.models import Sequential

# Create a sequential model with one hidden layer
model = Sequential()
model.add(Dense(units=2, activation='relu', input_dim=2))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
print("Initial weights:")
print(model.get_weights())

# XOR truth table as training data
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([0., 1., 1., 0.])
model.fit(X, Y, epochs=1000, batch_size=4, verbose=1)

print("Weights after training:")
print(model.get_weights())
print("Predictions:")
print(model.predict(X, batch_size=4))

***********    ***********    ***********

# DL-3: Implementing a deep neural network for a binary classification task
# pip install keras

import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

names = ["No. of pregnancies", "Glucose level", "Blood Pressure", "skin thickness",
         "Insulin", "BMI", "Diabetes pedigree", "Age", "Class"]
# CSV file with no column names expected
df = pd.read_csv("/content/pima-indians-diabetes.data.csv", names=names)
df.head(3)

binaryc = Sequential()
binaryc.add(Dense(units=10, activation="relu", input_dim=8))
binaryc.add(Dense(units=8, activation="relu"))
binaryc.add(Dense(units=1, activation="sigmoid"))
binaryc.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

X = df.iloc[:, :-1]
y = df.iloc[:, -1]
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=0.25, random_state=1)
binaryc.fit(xtrain, ytrain, epochs=200, batch_size=20)

# Threshold the sigmoid outputs at 0.5 to get class labels
predictions = binaryc.predict(xtest)
class_labels = [1 if p > 0.5 else 0 for p in predictions]
print("Accuracy Score", accuracy_score(ytest, class_labels))

***********    ***********    ***********

# DL-4a: Feed-forward network with multiple hidden layers for multiclass classification
# Requires the flower_1.csv data set

import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical

df = pd.read_csv("/content/flower_1.csv")
df.head()
x = df.iloc[:, :-1].astype(float)
y = df.iloc[:, -1]
print(x.shape)
print(y.shape)

# Label-encode the string classes, then one-hot encode them
lb = LabelEncoder()
y = lb.fit_transform(y)
encoded_Y = to_categorical(y)

model = Sequential()
model.add(Dense(units=10, activation='relu', input_dim=4))
model.add(Dense(units=8, activation='relu'))
model.add(Dense(units=3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x, encoded_Y, epochs=400, batch_size=10)

predict = model.predict(x)
for i in range(35, 150, 3):
    print(predict[i], encoded_Y[i])
predicted = [np.argmax(p) for p in predict]
newdf = pd.DataFrame(list(zip(predicted, y)), columns=['Predicted', 'Actual'])
print(newdf)

***********    ***********    ***********

# DL-4b: Deep feed-forward network with two hidden layers for classification,
# predicting the probability of each class

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Create a synthetic dataset for binary classification
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# One-hot encode the labels for categorical crossentropy loss
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

model = Sequential()
model.add(Dense(64, input_dim=X_train.shape[1], activation='relu'))  # first hidden layer
model.add(Dense(32, activation='relu'))                              # second hidden layer
model.add(Dense(2, activation='softmax'))                            # output layer (2 classes)
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])

model.fit(X_train, y_train, epochs=20, batch_size=32, validation_data=(X_test, y_test))
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {accuracy*100:.2f}%")

# Predict class probabilities for the test data
probabilities = model.predict(X_test)
print(probabilities[:5])

***********    ***********    ***********

# DL-4c: Deep feed-forward network with two hidden layers for linear regression

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_regression

# Create a synthetic dataset for regression
X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Sequential()
model.add(Dense(64, input_dim=X_train.shape[1], activation='relu'))  # first hidden layer
model.add(Dense(32, activation='relu'))                              # second hidden layer
model.add(Dense(1))  # linear output for regression (no activation)
model.compile(loss='mean_squared_error', optimizer=Adam(), metrics=['mean_absolute_error'])

model.fit(X_train, y_train, epochs=20, batch_size=32, validation_data=(X_test, y_test))
loss, mae = model.evaluate(X_test, y_test)
print(f"Test Mean Absolute Error: {mae:.2f}")

predictions = model.predict(X_test)
print("Predictions:", predictions[:5].flatten())
print("Actual values:", y_test[:5])

***********    ***********    ***********

# DL-5a: Evaluating a feed-forward deep network for regression using KFold cross-validation
# !pip install keras==2.15.0 scikit-learn scikeras

import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from scikeras.wrappers import KerasRegressor
from sklearn.model_selection import cross_val_score, KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline

dataframe = pd.read_csv("/content/housing.csv")
dataset = dataframe.values
print("Shape of dataset:", dataset.shape)
X = dataset[:, :-1]  # all columns except the last are features
Y = dataset[:, -1]   # the last column is the target

def wider_model():
    model = Sequential()
    model.add(Dense(15, input_dim=13, kernel_initializer='normal', activation='relu'))
    model.add(Dense(13, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

estimators = [('standardize', StandardScaler()),
              ('mlp', KerasRegressor(model=wider_model, epochs=10, batch_size=5))]
pipeline = Pipeline(estimators)
kfold = KFold(n_splits=10)
results = cross_val_score(pipeline, X, Y, cv=kfold)
print("Wider: %.2f (%.2f) MSE" % (results.mean(), results.std()))

***********    ***********    ***********

# DL-5b: Evaluating a feed-forward deep network for multiclass classification
# !pip install scikeras

import pandas
from keras.models import Sequential
from keras.layers import Dense
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelEncoder

df = pandas.read_csv('/content/flowers.csv', header=0)
print(df)
X = df.iloc[:, 0:4].astype(float)
y = df.iloc[:, 4]

# Encode the string labels as integers, then one-hot encode them
encoder = LabelEncoder()
encoder.fit(y)
encoded_y = encoder.transform(y)
print(encoded_y)
dummy_Y = to_categorical(encoded_y)
print(dummy_Y)

def baseline_model():
    model = Sequential()
    model.add(Dense(8, input_dim=4, activation='relu'))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

estimator = baseline_model()
estimator.fit(X, dummy_Y, epochs=100, shuffle=True)
action = estimator.predict(X)
for i in range(25):
    print(dummy_Y[i])
print('^^^^^^^^^^^^^^^^^^^^^^')
for i in range(25):
    print(action[i])

***********    ***********    ***********

# DL-6a: Implementing regularization to avoid overfitting in binary classification

from matplotlib import pyplot
from sklearn.datasets import make_moons
from keras.models import Sequential
from keras.layers import Dense

X, Y = make_moons(n_samples=100, noise=0.2, random_state=1)
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:]
trainY, testY = Y[:n_train], Y[n_train:]

# A deliberately over-parameterized model on a tiny training set, to show overfitting
model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(trainX, trainY, validation_data=(testX, testY), epochs=1000)

pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()

***********    ***********    ***********

# DL-6b: Implement l2 regularization with alpha = 0.001

from matplotlib import pyplot
from sklearn.datasets import make_moons
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2

X, Y = make_moons(n_samples=100, noise=0.2, random_state=1)
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:]
trainY, testY = Y[:n_train], Y[n_train:]

model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu', kernel_regularizer=l2(0.001)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(trainX, trainY, validation_data=(testX, testY), epochs=1000)

pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()

***********    ***********    ***********

# DL-6c: Replace l2 regularization with combined l1_l2 regularization
# !pip install pandas matplotlib keras tensorflow

from matplotlib import pyplot
from sklearn.datasets import make_moons
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l1_l2

X, Y = make_moons(n_samples=100, noise=0.2, random_state=1)
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:]
trainY, testY = Y[:n_train], Y[n_train:]

model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu',
                kernel_regularizer=l1_l2(l1=0.001, l2=0.001)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(trainX, trainY, validation_data=(testX, testY), epochs=400)

pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()

***********    ***********    ***********

# DL-7: Recurrent neural network that learns to perform sequence analysis for stock prices

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout
from sklearn.preprocessing import MinMaxScaler

# Read and scale the training dataset
dataset_train = pd.read_csv('/content/Google_Stock_Price_Train.csv')
training_set = dataset_train.iloc[:, 1:2].values
sc = MinMaxScaler(feature_range=(0, 1))
training_set_scaled = sc.fit_transform(training_set)

# Build sliding 60-day windows
X_train = []
Y_train = []
for i in range(60, 1258):
    X_train.append(training_set_scaled[i-60:i, 0])
    Y_train.append(training_set_scaled[i, 0])
X_train, Y_train = np.array(X_train), np.array(Y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))

# Build the stacked LSTM model
regressor = Sequential()
regressor.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50))
regressor.add(Dropout(0.2))
regressor.add(Dense(units=1))
regressor.compile(optimizer='adam', loss='mean_squared_error')
regressor.fit(X_train, Y_train, epochs=100, batch_size=32)

# Read the test dataset and build test windows from the combined series
dataset_test = pd.read_csv('/content/Google_Stock_Price_Test.csv')
real_stock_price = dataset_test.iloc[:, 1:2].values
dataset_total = pd.concat((dataset_train['Open'], dataset_test['Open']), axis=0)
inputs = dataset_total[len(dataset_total) - len(dataset_test) - 60:].values
inputs = inputs.reshape(-1, 1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 80):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))

# Predict and invert the scaling
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)

# Visualize results
plt.plot(real_stock_price, color='red', label='Real Google Stock Price')
plt.plot(predicted_stock_price, color='blue', label='Predicted Stock Price')
plt.xlabel('Time')
plt.ylabel('Google Stock Price')
plt.legend()
plt.show()

***********    ***********    ***********

# DL-8: Performing encoding and decoding of images using a deep autoencoder

import keras
from keras import layers
from keras.datasets import mnist
import numpy as np
import matplotlib.pyplot as plt

encoding_dim = 32

# Build the autoencoder: 784 -> 32 -> 784
input_img = keras.Input(shape=(784,))
encoded = layers.Dense(encoding_dim, activation='relu')(input_img)  # encoded representation
decoded = layers.Dense(784, activation='sigmoid')(encoded)          # lossy reconstruction
autoencoder = keras.Model(input_img, decoded)

# Separate encoder and decoder models
encoder = keras.Model(input_img, encoded)
encoded_input = keras.Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]  # last layer of the autoencoder
decoder = keras.Model(encoded_input, decoder_layer(encoded_input))

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# Scale pixel values and flatten the images
(X_train, _), (X_test, _) = mnist.load_data()
X_train = X_train.astype('float32') / 255.
X_test = X_test.astype('float32') / 255.
X_train = X_train.reshape((len(X_train), np.prod(X_train.shape[1:])))
X_test = X_test.reshape((len(X_test), np.prod(X_test.shape[1:])))
print(X_train.shape)
print(X_test.shape)

# Train the autoencoder to reconstruct its input
autoencoder.fit(X_train, X_train, epochs=50, batch_size=256, shuffle=True,
                validation_data=(X_test, X_test))

encoded_imgs = encoder.predict(X_test)
decoded_imgs = decoder.predict(encoded_imgs)

n = 10  # how many digits to display
plt.figure(figsize=(40, 4))
for i in range(n):
    # display original
    ax = plt.subplot(3, 20, i + 1)
    plt.imshow(X_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display encoded image
    ax = plt.subplot(3, 20, i + 1 + 20)
    plt.imshow(encoded_imgs[i].reshape(8, 4))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(3, 20, 2 * 20 + i + 1)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

***********    ***********    ***********

# DL-9: Implementation of a convolutional neural network to predict digits from number images

from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
import matplotlib.pyplot as plt

# Download MNIST data and split into train and test sets
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()
plt.imshow(X_train[0])
plt.show()
print(X_train[0].shape)

# Add a channel dimension for the CNN
X_train = X_train.reshape(60000, 28, 28, 1)
X_test = X_test.reshape(10000, 28, 28, 1)

# One-hot encode labels
Y_train = to_categorical(Y_train)
Y_test = to_categorical(Y_test)
print(Y_train[0])

model = Sequential()
model.add(Conv2D(64, kernel_size=3, activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(32, kernel_size=3, activation='relu'))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))  # 10-class output
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=3)

# Predicted probabilities and actual labels for the first 4 test images
predictions = model.predict(X_test[:4])
print(predictions)
print(Y_test[:4])

***********    ***********    ***********

# DL-10: Denoising of images using an autoencoder

import keras
from keras.datasets import mnist
from keras import layers
import numpy as np
from keras.callbacks import TensorBoard
import matplotlib.pyplot as plt

(X_train, _), (X_test, _) = mnist.load_data()
X_train = X_train.astype('float32') / 255.
X_test = X_test.astype('float32') / 255.
X_train = np.reshape(X_train, (len(X_train), 28, 28, 1))
X_test = np.reshape(X_test, (len(X_test), 28, 28, 1))

# Add Gaussian noise and clip back into [0, 1]
noise_factor = 0.5
X_train_noisy = X_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=X_train.shape)
X_test_noisy = X_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=X_test.shape)
X_train_noisy = np.clip(X_train_noisy, 0., 1.)
X_test_noisy = np.clip(X_test_noisy, 0., 1.)

# Show some noisy inputs
n = 10
plt.figure(figsize=(20, 2))
for i in range(1, n + 1):
    ax = plt.subplot(1, n, i)
    plt.imshow(X_test_noisy[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

# Convolutional autoencoder
input_img = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(x)
encoded = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = layers.UpSampling2D((2, 2))(x)
decoded = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = keras.Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# Train on noisy inputs against clean targets
autoencoder.fit(X_train_noisy, X_train, epochs=3, batch_size=128, shuffle=True,
                validation_data=(X_test_noisy, X_test),
                callbacks=[TensorBoard(log_dir='/tmp/tb', histogram_freq=0, write_graph=False)])

# Show the denoised reconstructions
predictions = autoencoder.predict(X_test_noisy)
m = 10
plt.figure(figsize=(20, 2))
for i in range(1, m + 1):
    ax = plt.subplot(1, m, i)
    plt.imshow(predictions[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

Self-Paced View Details

AAI

********Design An Expert System using AIML. (If…Else..Else…If)# Design An Expert System using AIML # An Expert system for responding the patient query for identifying the flu. # Create an empty list to store informationinfo = [] # Input the user's name and add it to the 'info' listname = input("Enter Your name: ")info.append(name) # Input the user's age as an integer and add it to the 'info' listage = int(input("Enter Your age: "))info.append(age) # Lists of common symptoms for Malaria and Diabetesa = ["Fever", "Headache", "Tiredness", "Vomiting"]b = ["Urinate A Lot", "Feels Thirsty", "Weight Loss", "Blurry Vision", "Feels Very Hungry", "Feels Very Tired"] # Print the lists of symptomsprint("Common Symptoms for Malaria:", a)print("Common Symptoms for Diabetes:", b) # Input symptoms separated by a comma and split them into a listsymp = input("Enter Symptoms As Above Separated By Comma: ")lst = symp.split(",") # Print the user's informationprint("User Information:")print("Name:", info[0])print("Age:", info[1]) print("Symptoms:")# Loop through the list of symptoms and print each onefor symptom in lst:    print(symptom.strip()) # Check if any symptom matches the symptoms for Malaria or Diabetesfor symptom in lst:    if symptom.strip() in a:        print("You May Have Malaria")        print("Please Visit A Doctor")        break    elif symptom.strip() in b:        print("You May Have Diabetes")        print("Consider Reducing Sugar Intake")        breakelse:    print("Symptoms Do Not Match Common Health Conditions") ********Design An Expert System using AIML 2.# Design An Expert System using AIML # Input user's namename = input("Enter your name: ") # Input whether the user has a fever, cough, shortness of breath, sore throat, muscle pain, and headache (Y/N)fever = input("DO YOU HAVE fever (Y/N)").lower()cough = input("DO YOU HAVE cough (Y/N)").lower()sob = input("DO YOU HAVE shortness of breath (Y/N)").lower()st = input("DO YOU HAVE sore throat (Y/N)").lower()mp = 
input("DO YOU HAVE muscle pain (Y/N)").lower()hc = input("DO YOU HAVE headache(Y/N)").lower() # Input whether the user has diarrhea, conjunctivitis, loss of taste, chest pain or pressure, and loss of speech or movement (Y/N)diarrhoea = input("DO YOU HAVE diarrhea (Y/N)").lower()conjunctivitis = input("DO YOU HAVE conjunctivitis (Y/N)").lower()lot = input("DO YOU HAVE Loss OF taste (Y/N)").lower()cp = input("DO YOU HAVE chest pain or pressure (Y/N)").lower()lsp = input("DO YOU HAVE Loss Of Speech or movement (Y/N)").lower() # Check for different conditions based on symptomsif fever == "y" and cough == "y" and sob == "y" and st == "y" and mp == "y" and hc == "y":    print(name + " YOU HAVE FLU")    med = input("Sir/Ma'am would you like to look at some medicine for flu (Y/N)").lower()    if med == "y":        print("Disclaimer: Contact a doctor for better guidance")        print("There are four FDA-approved antiviral drugs recommended by CDC to treat flu this season")        print("1. Oseltamivir phosphate")        print("2. Zanamivir")        print("3. Peramivir")        print("4. Baloxavir marboxil")elif diarrhoea == "y" and st == "y" and fever == "y" and cough == "y" and conjunctivitis == "y" and lot == "y":    print(name + " YOU HAVE CORONA")    med = input("Sir/Ma'am would you like to look at some remedies for Corona (Y/N)").lower()    if med == "y":        print("TAKE VACCINE AND QUARANTINE")elif fever == "y" and cough == "y":    print(name + " YOU HAVE Common Cold")    med = input("Sir/Ma'am would you like to look at some remedies for common cold (Y/N)").lower()    if med == "y":        print("Disclaimer: Contact a doctor for better guidance")        print("Treatment consists of anti-inflammatories and decongestants")        print("Most people recover on their own")        print("1. Nonsteroidal anti-inflammatory drug")        print("2. Analgesic")        print("3. Antihistamine")        print("4. Cough medicine")        print("5. 
Decongestant")else:    print("Unable to identify") ********Design a Chatbot using AIML Create a new AIML file named basic_chatbot.aiml:xml<aiml version="1.0.1" encoding="UTF-8">    <!-- Basic chatbot AIML file -->     <category>        <pattern>HELLO</pattern>        <template>Hello! How can I help you today?</template>    </category>     <category>        <pattern>WHAT IS YOUR PURPOSE</pattern>        <template>I'm here to assist you and answer your questions.</template>    </category>     <category>        <pattern>GOODBYE</pattern>        <template>Goodbye! Have a great day!</template>    </category>     <category>        <pattern>*</pattern>        <template>I'm sorry, I don't understand. Can you please rephrase?</template>    </category> </aiml>  Python Code import aiml # Create the Kernel and learn AIML fileskernel = aiml.Kernel()kernel.learn("basic_chatbot.aiml") # Main loopwhile True:    # User input    user_input = input("You: ")     # Bot response    bot_response = kernel.respond(user_input)    print("Bot:", bot_response)     ********Implement Bayes Theorem using Python.def bayes_theorem(p_h, p_e_given_h, p_e_given_not_h):    p_not_h = 1 - p_h    p_e = (p_e_given_h * p_h) + (p_e_given_not_h * p_not_h)    p_h_given_e = (p_e_given_h * p_h) / p_e    return p_h_given_e p_h = float(input("Enter the probability of NK having a cold: "))p_e_given_h = float(    input("Enter the probability of observing sneezing when NK has a cold: "))p_e_given_not_h = float(    input(        "Enter the probability of observing sneezing when NK does not have a cold: "    )) result = bayes_theorem(p_h, p_e_given_h, p_e_given_not_h) print(    "NK's probability of having a cold given that he sneezes (P(H|E)) is:",    round(result, 2),) ********Implement Bayes Theorem using Python.def drug_user(    prob_th=0.5, sensitivity=0.97, specificity=0.95, prevelance=0.005, verbose=True):    # FORMULA    p_user = prevelance    p_non_user = 1 - prevelance    p_pos_user = sensitivity    p_neg_user 
= specificity    p_pos_non_user = 1 - specificity    num = p_pos_user * p_user    den = p_pos_user * p_user + p_pos_non_user * p_non_user    prob = num / den    print("Probability of the NK being a drug user is", round(prob, 3))    if verbose:        if prob > prob_th:            print("The NK could be an user")        else:            print("The NK may not be an user")    return prob drug_user() ********Write an application to implement DFS algorithmgraph = {"5": ["3", "7"], "3": ["2", "4"], "7": ["8"], "2": [], "4": ["8"], "8": []} visited = []  # List for visited nodes.queue = []  # Initialize a queue  def bfs(visited, graph, node):  # function for BFS    visited.append(node)    queue.append(node)     while queue:  # Creating loop to visit each node        m = queue.pop(0)        print(m, end=" ")         for neighbour in graph[m]:            if neighbour not in visited:                visited.append(neighbour)                queue.append(neighbour)  # Driver Codeprint("Following is the Breadth-First Search")bfs(visited, graph, "5")  # function calling ********Write an application to implement DFS / BFS algorithm####################################################### Using a Python dictionary to act as an adjacency listgraph = {"5": ["3", "7"], "3": ["2", "4"], "7": ["8"], "2": [], "4": ["8"], "8": []} visited = set()  # Set to keep track of visited nodes of graph.  def dfs(visited, graph, node):  # function for dfs    if node not in visited:        print(node)        visited.add(node)        for neighbour in graph[node]:            dfs(visited, graph, neighbour)  # Driver Codeprint("Following is the Depth-First Search")dfs(visited, graph, "5") ******** Rule Based System.male(vijay).male(mahadev).male(gaurihar).male(omkar).male(bajrang).male(chaitanya). female(vasanti).female(indubai).female(ashwini).female(gayatri).female(sangita). 
parent(vijay,chaitanya). parent(vasanti,chaitanya).
parent(vijay,gaurihar). parent(vasanti,gaurihar).
parent(vijay,ashwini). parent(vasanti,ashwini).
parent(mahadev,vijay). parent(indubai,vijay).

mother(X,Y) :- parent(X,Y), female(X).
father(X,Y) :- parent(X,Y), male(X).

grandmother(GM,X) :- mother(GM,Y), parent(Y,X).
grandfather(GF,X) :- father(GF,Y), parent(Y,X).

greatgrandmother(GGM,X) :- mother(GGM,G), parent(G,Y), parent(Y,X).
greatgrandfather(GGF,X) :- father(GGF,G), parent(G,Y), parent(Y,X).

sibling(X,Y) :- mother(M,X), mother(M,Y), X \= Y, father(F,X), father(F,Y).
brother(X,Y) :- sibling(X,Y), male(X).
sister(X,Y) :- sibling(X,Y), female(X).

uncle(U,X) :- parent(Y,X), brother(U,Y).
aunt(A,X) :- parent(Y,X), sister(A,Y).
nephew(N,X) :- sibling(S,X), parent(S,N), male(N).
niece(N,X) :- sibling(S,X), parent(S,N), female(N).
cousin(X,Y) :- parent(P,Y), sibling(S,P), parent(S,X).

-----------------------------------------------------
Query
father(X,Y).
mother(X,Y).

******** Rule Based System.

/* https://swish.swi-prolog.org/ */

/* Facts */
male(jack). male(oliver). male(ali). male(james). male(simon). male(harry).
female(helen). female(sophie). female(jess). female(lily).

parent_of(jack, jess). parent_of(jack, lily).
parent_of(helen, jess). parent_of(helen, lily).
parent_of(oliver, james). parent_of(sophie, james).
parent_of(jess, simon). parent_of(ali, simon).
parent_of(lily, harry). parent_of(james, harry).

/* Rules */
father_of(X, Y) :- male(X), parent_of(X, Y).
mother_of(X, Y) :- female(X), parent_of(X, Y).

grandfather_of(X, Y) :- male(X), parent_of(X, Z), parent_of(Z, Y).
grandmother_of(X, Y) :- female(X), parent_of(X, Z), parent_of(Z, Y).

sister_of(X, Y) :- female(X), father_of(F, Y), father_of(F, X), X \= Y.
sister_of(X, Y) :- female(X), mother_of(M, Y), mother_of(M, X), X \= Y.

aunt_of(X, Y) :- female(X), parent_of(Z, Y), sister_of(Z, X), !.

brother_of(X, Y) :- male(X), father_of(F, Y), father_of(F, X), X \= Y.
brother_of(X, Y) :- male(X), mother_of(M, Y), mother_of(M, X), X \= Y.

uncle_of(X, Y) :- parent_of(Z, Y), brother_of(Z, X).

ancestor_of(X, Y) :- parent_of(X, Y).
ancestor_of(X, Y) :- parent_of(X, Z), ancestor_of(Z, Y).

-----------------------------------------------------
Query
father_of(X,Y).
mother_of(X,Y).

******** Design a Fuzzy based operations using Python / R.

# Initialize the dictionaries for fuzzy sets A, B, and the result
A = {"a": 0.2, "b": 0.3, "c": 0.6, "d": 0.6}
B = {"a": 0.9, "b": 0.9, "c": 0.4, "d": 0.5}
result = {}

# Display the fuzzy sets A and B
print("The First Fuzzy Set is:", A)
print("The Second Fuzzy Set is:", B)

# Fuzzy set union: maximum of the memberships
for i in A:
    if A[i] > B[i]:
        result[i] = A[i]
    else:
        result[i] = B[i]
print("Union of two sets is", result)

# Fuzzy set intersection: minimum of the memberships
result = {}
for i in A:
    if A[i] < B[i]:
        result[i] = A[i]
    else:
        result[i] = B[i]
print("Intersection of two sets is", result)

# Fuzzy set complement: 1 - membership
result = {}
for i in A:
    result[i] = round(1 - A[i], 2)
print("Complement of First set is", result)

# Fuzzy set difference: min(A, 1 - B)
result = {}
for i in A:
    result[i] = round(min(A[i], 1 - B[i]), 2)
print("Difference of two sets is", result)

******** Design a Fuzzy based application using Python / R.

# !pip install fuzzywuzzy
from fuzzywuzzy import fuzz
from fuzzywuzzy import process

s1 = "I love GeeksforGeeks"
s2 = "I am loving GeeksforGeeks"
print("FuzzyWuzzy Ratio: ", fuzz.ratio(s1, s2))
print("FuzzyWuzzy PartialRatio: ", fuzz.partial_ratio(s1, s2))
print("FuzzyWuzzy TokenSortRatio: ", fuzz.token_sort_ratio(s1, s2))
print("FuzzyWuzzy TokenSetRatio: ", fuzz.token_set_ratio(s1, s2))
print("FuzzyWuzzy WRatio: ", fuzz.WRatio(s1, s2), "\n\n")

# Using the process module
query = "geeks for geeks"
choices = ["geek for geek", "geek geek", "g. for geeks"]
print("List of ratios: ")
print(process.extract(query, choices), "\n")
print("Best among the above list: ", process.extractOne(query, choices))

******** Implement joint probability using Python.

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd

sns.set()

# Read the dataset
data = pd.read_csv("student-mat.csv")

# Create a joint plot of final grade (G3) vs. absences
sns.jointplot(data=data, x="G3", y="absences", kind="kde")

# Display the plot
plt.show()

******** Implement Conditional probability using Python.

import pandas as pd
import numpy as np

df = pd.read_csv("student-mat.csv")
df.head(3)
len(df)

df["grade_A"] = np.where(df["G3"] * 5 >= 80, 1, 0)
df["high_absenses"] = np.where(df["absences"] >= 10, 1, 0)
df["count"] = 1

df = df[["grade_A", "high_absenses", "count"]]
df.head()

pd.pivot_table(
    df,
    values="count",
    index=["grade_A"],
    columns=["high_absenses"],
    aggfunc=np.size,
    fill_value=0,
)

******** Write an application to implement clustering algorithm.

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import scipy.cluster.hierarchy as shc
from sklearn.cluster import AgglomerativeClustering

# Read the customer data from a CSV file
customer_data = pd.read_csv("Mall_Customers.csv")

# Display the shape and the first few rows of the data
print(customer_data.shape)
customer_data.head()

# Extract the relevant columns from the data
data = customer_data.iloc[:, 3:5].values

# Create a dendrogram plot
plt.figure(figsize=(10, 7))
plt.title("Customer Dendrograms")
dend = shc.dendrogram(shc.linkage(data, method="ward"))

# Perform hierarchical clustering
# (note: newer scikit-learn versions use `metric` instead of `affinity`)
cluster = AgglomerativeClustering(n_clusters=5, affinity="euclidean", linkage="ward")
cluster_labels = cluster.fit_predict(data)

# Create a scatter plot to visualize the clusters
plt.figure(figsize=(10, 7))
plt.scatter(data[:, 0], data[:, 1], c=cluster_labels, cmap="rainbow")
plt.show()

******** Synthetic Classification

from numpy import where
from sklearn.datasets import make_classification
from matplotlib import pyplot

x, y = make_classification(
    n_samples=1000,
    n_features=2,
    n_informative=2,
    n_redundant=0,
    n_clusters_per_class=1,
    random_state=4,
)

# Plot the samples from each of the two classes
for class_value in range(2):
    row_ix = where(y == class_value)
    pyplot.scatter(x[row_ix, 0], x[row_ix, 1])
pyplot.show()

******** SUPERVISED LEARNING METHODS USING PYTHON

### Step 1: Import pandas and numpy (pandas is used for table manipulation), load the Titanic training dataset, and inspect the first five rows with head().
import pandas as pd
import numpy as np
titanic = pd.read_csv("train.csv")
titanic.head()

### Step 2: Create two data frames, one containing the categorical columns and one containing the numeric columns.
titanic_cat = titanic.select_dtypes(object)
titanic_num = titanic.select_dtypes(np.number)

### Step 3: Drop the Name and Ticket columns.
titanic_cat.head()
titanic_num.head()
titanic_cat.drop(['Name', 'Ticket'], axis=1, inplace=True)

### Step 4: Find the null values present in the categorical columns.
titanic_cat.isnull().sum()

### Step 5: Replace all null values with the most frequent category.
titanic_cat.Cabin.fillna(titanic_cat.Cabin.value_counts().idxmax(), inplace=True)
titanic_cat.Embarked.fillna(titanic_cat.Embarked.value_counts().idxmax(), inplace=True)

### Step 6: After removing all the null values, the new data set is ready.
titanic_cat.head(20)

### Step 7: Replace the categories with numerical labels using LabelEncoder.
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
titanic_cat = titanic_cat.apply(le.fit_transform)
titanic_cat.head()
titanic_num.isna().sum()

### Step 8: Only one numeric column (Age) still contains null values; replace them with the mean.
titanic_num.Age.fillna(titanic_num.Age.mean(), inplace=True)
titanic_num.isna().sum()

### Step 9: Drop the unnecessary PassengerId column.
titanic_num.drop(['PassengerId'], axis=1, inplace=True)
titanic_num.head()

### Step 10: Combine the two data frames into one.
titanic_final = pd.concat([titanic_cat, titanic_num], axis=1)
titanic_final.head()

### Step 11: Define the dependent and independent variables.
X = titanic_final.drop(['Survived'], axis=1)
Y = titanic_final['Survived']

### Step 12: Take 80% of the data as the training set and the remaining 20% as the test set.
X_train = np.array(X[0:int(0.80 * len(X))])
Y_train = np.array(Y[0:int(0.80 * len(Y))])
X_test = np.array(X[int(0.80 * len(X)):])
Y_test = np.array(Y[int(0.80 * len(Y)):])
len(X_train), len(Y_train), len(X_test), len(Y_test)

### Step 13: Import all the algorithms.
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

### Step 14: Initialize them in respective variables.
LR = LogisticRegression()
KNN = KNeighborsClassifier()
NB = GaussianNB()
LSVM = LinearSVC()
NLSVM = SVC(kernel='rbf')
DT = DecisionTreeClassifier()
RF = RandomForestClassifier()

### Step 15: Train the models.
LR_fit = LR.fit(X_train, Y_train)
KNN_fit = KNN.fit(X_train, Y_train)
NB_fit = NB.fit(X_train, Y_train)
LSVM_fit = LSVM.fit(X_train, Y_train)
NLSVM_fit = NLSVM.fit(X_train, Y_train)
DT_fit = DT.fit(X_train, Y_train)
RF_fit = RF.fit(X_train, Y_train)

### Step 16: Predict on the test set and compare the accuracy scores.
LR_pred = LR_fit.predict(X_test)
KNN_pred = KNN_fit.predict(X_test)
NB_pred = NB_fit.predict(X_test)
LSVM_pred = LSVM_fit.predict(X_test)
NLSVM_pred = NLSVM_fit.predict(X_test)
DT_pred = DT_fit.predict(X_test)
RF_pred = RF_fit.predict(X_test)

from sklearn.metrics import accuracy_score
print("Logistic Regression is %f percent accurate" % (accuracy_score(LR_pred, Y_test) * 100))
print("KNN is %f percent accurate" % (accuracy_score(KNN_pred, Y_test) * 100))
print("Naive Bayes is %f percent accurate" % (accuracy_score(NB_pred, Y_test) * 100))
print("Linear SVM is %f percent accurate" % (accuracy_score(LSVM_pred, Y_test) * 100))
print("Non-linear SVM is %f percent accurate" % (accuracy_score(NLSVM_pred, Y_test) * 100))
print("Decision Trees is %f percent accurate" % (accuracy_score(DT_pred, Y_test) * 100))
print("Random Forests is %f percent accurate" % (accuracy_score(RF_pred, Y_test) * 100))

******** Design an Artificial Intelligence application to implement intelligent agents.

class ClothesAgent:
    def __init__(self):
        self.weather = None

    def get_weather(self):
        # Simulating weather conditions (modify as needed)
        self.weather = input("Enter the weather (sunny, rainy, windy, snowy): ").lower()

    def suggest_clothes(self):
        if self.weather == "sunny":
            print("It's sunny outside. You should wear light clothes, sunglasses, and sunscreen.")
        elif self.weather == "rainy":
            print("It's rainy outside. Don't forget an umbrella, raincoat, and waterproof shoes.")
        elif self.weather == "windy":
            print("It's windy outside. Wear layers and a jacket to stay warm.")
        elif self.weather == "snowy":
            print("It's snowy outside. Dress warmly with a heavy coat, gloves, and boots.")
        else:
            print("Sorry, I don't understand the weather condition. Please enter sunny, rainy, windy, or snowy.")

def main():
    agent = ClothesAgent()
    agent.get_weather()
    agent.suggest_clothes()

if __name__ == "__main__":
    main()

******** Design an application to simulate a language parser.

def sentenceSegment(text):
    sentences = []
    start = 0
    for i in range(len(text)):
        if text[i] == "." or text[i] == "!" or text[i] == "?":
            sentences.append(text[start:i + 1].strip())
            start = i + 1
    return sentences

text = "Hello, NLP world!! In this example, we are going to do the basics of Text processing which will be used later."
print(sentenceSegment(text))

# %pip install nltk
import nltk
nltk.download("punkt")

text = "Hello, NLP world!! In this example, we are going to do the basics of Text processing which will be used later."
sentences = nltk.sent_tokenize(text)
print(sentences)

import string

def remove_punctuation(input_string):
    # Define a string of punctuation marks and symbols
    punctuations = string.punctuation
    # Remove the punctuation marks and symbols from the input string
    output_string = "".join(char for char in input_string if char not in punctuations)
    return output_string

text = "Hello, NLP world!! In this example, we are going to do the basics of Text processing which will be used later."
sentences = sentenceSegment(text)
puncRemovedText = remove_punctuation(text)
print(puncRemovedText)

def convertToLower(s):
    return s.lower()

text = "Hello, NLP world!! In this example, we are going to do the basics of Text processing which will be used later."
puncRemovedText = remove_punctuation(text)
lowerText = convertToLower(puncRemovedText)
print(lowerText)

# Tokenize without using any library functions, only loops and if/else
def tokenize(s):
    words = []  # token words are stored here
    i = 0
    word = ""
    while i < len(s):
        if s[i] != " ":
            word = word + s[i]
        else:
            words.append(word)
            word = ""
        i = i + 1
    words.append(word)
    return words

text = "Hello, NLP world!! In this example, we are going to do the basics of Text processing which will be used later."
puncRemovedText = remove_punctuation(text)
lowerText = convertToLower(puncRemovedText)
tokenizedText = tokenize(lowerText)
print(tokenizedText)

import nltk

# Define input text
text = "Hello, NLP world!! In this example, we are going to do the basics of Text processing which will be used later."
# Sentence segmentation, removal of punctuation, and conversion to lowercase
sentences = nltk.sent_tokenize(text)
puncRemovedText = remove_punctuation(text)
lowerText = convertToLower(puncRemovedText)

# Tokenize the text
tokens = nltk.word_tokenize(lowerText)

# Print the tokens
print(tokens)

import nltk

sentence = "We're going to John's house today."
tokens = nltk.word_tokenize(sentence)
print(tokens)

******** Interactive ChatBOT

name = input('You: Please enter your name : ')
print(f'Chatbot: Hello {name}\nChatbot: My name is Chatbot')

age = input('You: Please enter your age : ')
print(f'Chatbot: Okay, so you are {age} years old\nChatbot: I am 5 days old')

color = input('You: What is your favourite color : ')
print(f'Chatbot: Okay, so your favourite color is {color}\nChatbot: My favourite color is Red')

player = input('You: Who is your favourite player : ')
print(f'Chatbot: Okay, so your favourite player is {player}\nChatbot: My favourite player is Kohli')
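The recursive DFS practical above can also be written iteratively with an explicit stack, which avoids Python's recursion limit on deep graphs. A minimal sketch, using the same adjacency list as the BFS/DFS exercises; returning the visit order instead of printing it keeps the function easy to test:

```python
# Iterative depth-first search with an explicit stack.
# Pushing neighbours in reverse order makes the visit order
# match the recursive version from the practical above.
def dfs_iterative(graph, start):
    visited = []
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.append(node)
            # reversed() so the first-listed neighbour is popped first
            for neighbour in reversed(graph[node]):
                stack.append(neighbour)
    return visited

graph = {"5": ["3", "7"], "3": ["2", "4"], "7": ["8"], "2": [], "4": ["8"], "8": []}
print(dfs_iterative(graph, "5"))  # ['5', '3', '2', '4', '8', '7'], same order as the recursive DFS
```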
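The ancestor_of rule in the second rule-based system is a transitive closure over the parent_of facts. The same idea can be sketched in Python (the fact list below copies the swish.swi-prolog.org example from the practical; ancestors() is a hypothetical helper, not part of the original exercise):

```python
# parent_of facts from the Prolog practical, as (parent, child) pairs
parent_of = [
    ("jack", "jess"), ("jack", "lily"), ("helen", "jess"), ("helen", "lily"),
    ("oliver", "james"), ("sophie", "james"), ("jess", "simon"),
    ("ali", "simon"), ("lily", "harry"), ("james", "harry"),
]

def ancestors(person):
    """Transitive closure: walk parent links upward until none remain."""
    found = set()
    frontier = [person]
    while frontier:
        child = frontier.pop()
        for p, c in parent_of:
            if c == child and p not in found:
                found.add(p)
                frontier.append(p)
    return found

print(ancestors("harry"))  # everyone who satisfies ancestor_of(X, harry)
```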
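With its default arguments, the drug_user() practical above is just Bayes' theorem with P(user) = 0.005, P(+|user) = 0.97, and P(+|non-user) = 1 - 0.95 = 0.05, so its result can be checked by hand:

```python
# Bayes' theorem check for the defaults in drug_user():
# P(user | +) = P(+|user) P(user) / [P(+|user) P(user) + P(+|non-user) P(non-user)]
prevalence = 0.005
sensitivity = 0.97
specificity = 0.95

numerator = sensitivity * prevalence                            # 0.00485
denominator = numerator + (1 - specificity) * (1 - prevalence)  # 0.00485 + 0.04975
posterior = numerator / denominator
print(round(posterior, 3))  # → 0.089, despite the test being 97% sensitive
```

The low posterior illustrates the base-rate effect: with a rare condition, false positives from the large non-user population dominate the true positives.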

Self-Paced View Details

PHP Essentials

A basic PHP Essentials course covers the foundational concepts and skills required to start working with PHP for web development.

Self-Paced View Details

OpenStack Training

OpenStack training curriculum available.

24
View Details