MCQsExam.com
Data Science / K-Nearest Neighbors (KNN)
- Posted By: MCQSEXAM

- A-The algorithm assigns the majority class label to the new data point
- B-The algorithm assigns the class label of the closest neighbor to the new data point
- C-The algorithm assigns a random class label to the new data point
- D-The algorithm assigns a weighted class label based on the distance to the neighbors

- A-Robustness to outliers and noisy data
- B-Low computational complexity during training
- C-Ability to handle large datasets
- D-Interpretability of the model

- A-To select the optimal value of K based on the training data
- B-To evaluate the performance of the model on unseen data
- C-To reduce the variance of the model
- D-To prevent overfitting of the model

- A-The increase in computational complexity with higher dimensions
- B-The decrease in accuracy with higher dimensions
- C-The increase in model flexibility with higher dimensions
- D-The decrease in training time with higher dimensions

- A-O(n)
- B-O(log n)
- C-O(n log n)
- D-O(n^2)

- A-It has no impact on the performance
- B-It affects the speed of the algorithm
- C-It affects the accuracy of the algorithm
- D-It affects the complexity of the decision boundary

- A-Training the model
- B-Calculating distances between data points
- C-Updating model parameters iteratively
- D-Selecting the value of K

- A-The model becomes more prone to underfitting
- B-The model becomes more prone to overfitting
- C-The decision boundary becomes smoother
- D-The decision boundary becomes more complex

- A-High computational complexity during training
- B-Susceptibility to overfitting
- C-Sensitivity to outliers and irrelevant features
- D-Difficulty in handling high-dimensional data

- A-Higher values of K lead to a smoother decision boundary
- B-Higher values of K lead to a more complex decision boundary
- C-Lower values of K lead to a smoother decision boundary
- D-Lower values of K lead to a more complex decision boundary

- A-Euclidean distance
- B-Cosine similarity
- C-Manhattan distance
- D-Minkowski distance
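The distance metrics named in the options above (Euclidean, Manhattan, Minkowski, cosine) can be sketched in a few lines of NumPy; the two vectors here are purely illustrative:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 6.0, 3.0])

# Euclidean distance: square root of summed squared differences (Minkowski, p=2)
euclidean = np.sqrt(np.sum((a - b) ** 2))

# Manhattan distance: summed absolute differences (Minkowski, p=1)
manhattan = np.sum(np.abs(a - b))

# Minkowski distance with a general order p
def minkowski(x, y, p):
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

# Cosine similarity: 1 means same direction, 0 means orthogonal
cosine_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(euclidean, manhattan, minkowski(a, b, 3), cosine_sim)
```

Note that Euclidean and Manhattan are just the p=2 and p=1 special cases of Minkowski, which is why scikit-learn's KNN exposes a single `p` parameter.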
- A-Kernel
- B-K-means
- C-Number of clusters
- D-Number of nearest neighbors

- A-By fitting a decision boundary to the data
- B-By calculating the mean of the nearest neighbors' labels
- C-By clustering data points into groups
- D-By reducing the dimensionality of the data

- A-Classification
- B-Regression
- C-Clustering
- D-Dimensionality reduction
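Several of the option sets above turn on how the choice of K shapes the decision boundary (small K: complex, overfit-prone; large K: smooth). A minimal scikit-learn sketch, using a synthetic dataset purely for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic two-class data; KNN has no real "training" phase, it just
# stores X and y and computes distances at prediction time
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Small K -> complex boundary (overfitting risk); large K -> smoother boundary
for k in (1, 5, 15):
    model = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(model, X, y, cv=5)  # accuracy on held-out folds
    print(f"K={k}: mean CV accuracy {scores.mean():.3f}")
```

Cross-validation is the usual way to pick K, since it measures performance on data the model did not see.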
Data Science / Neural Networks

- A-Lower computational complexity
- B-Faster training time
- C-Ability to capture complex hierarchical patterns in the data
- D-Higher interpretability

- A-Transferring knowledge from one Neural Network to another
- B-Transferring pre-trained models to new tasks
- C-Transferring data between different layers of a Neural Network
- D-Transferring weights between different neurons

- A-Batch normalization
- B-Dropout regularization
- C-Gradient clipping
- D-Early stopping

- A-Controlling the rate of weight updates during training
- B-Controlling the size of the input data
- C-Controlling the number of neurons in each layer
- D-Controlling the number of training epochs

- A-Decreased model capacity
- B-Increased computational complexity
- C-Improved generalization performance
- D-Decreased risk of overfitting

- A-To introduce non-linearity into the model
- B-To control the learning rate during training
- C-To normalize the input data
- D-To regularize the model and prevent overfitting

- A-Difficulty in interpreting model predictions
- B-Prone to underfitting due to model complexity
- C-Limited capacity to capture complex patterns in the data
- D-Fast training time compared to shallow models

- A-To reduce the dimensionality of feature maps
- B-To increase the spatial resolution of feature maps
- C-To introduce non-linearity to the model
- D-To regularize the model and prevent overfitting

- A-Natural language processing
- B-Image recognition and classification
- C-Time series forecasting
- D-Reinforcement learning

- A-The weights of the model become too large during training
- B-The learning rate decreases too rapidly during training
- C-The gradients become extremely small, hindering learning in deep networks
- D-The model becomes too complex to converge to a solution

- A-Reducing the computational complexity of training
- B-Reducing overfitting by adding noise to the input data
- C-Normalizing the input data to speed up training
- D-Normalizing the activations of intermediate layers to stabilize training

- A-The model learns to generalize well to unseen data
- B-The model learns to memorize the training data without generalizing
- C-The model stops learning before reaching optimal performance
- D-The model becomes too simple to capture complex patterns in the data

- A-Learning rate
- B-Activation function
- C-Number of hidden layers
- D-Input data

- A-Dropping out randomly selected neurons during training
- B-Reducing the learning rate over time during training
- C-Adding noise to the input data during training
- D-Applying L1 or L2 regularization to the weights of the model
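The dropout options above describe zeroing randomly selected neurons during training. A minimal inverted-dropout sketch in NumPy (the drop rate and array shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random((4, 5))  # a batch of hidden-layer activations
p_drop = 0.5                      # probability of zeroing each neuron

# Inverted dropout: zero out neurons at random, scale survivors by 1/(1 - p)
# so the expected activation magnitude is unchanged and no rescaling is
# needed at test time
mask = (rng.random(activations.shape) >= p_drop) / (1.0 - p_drop)
dropped = activations * mask

print((dropped == 0).mean())  # roughly p_drop of the entries are zeroed
```

Because each forward pass samples a different mask, the network cannot rely on any single neuron, which is what gives dropout its regularizing effect.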
- A-Adjusting the learning rate during training
- B-Propagating errors backward to update the weights
- C-Initializing the weights of the Neural Network
- D-Regularizing the model to prevent overfitting

- A-Regularizing the model
- B-Evaluating the performance of the model
- C-Updating the weights of the model
- D-Introducing non-linearity to the model

- A-Computes the output of the Neural Network
- B-Computes the loss function of the Neural Network
- C-Introduces non-linearity to the model
- D-Updates the weights of the Neural Network

- A-A layer in a Neural Network
- B-The activation function of a Neural Network
- C-The basic building block of a Neural Network
- D-The loss function of a Neural Network

- A-Clustering
- B-Classification and regression
- C-Dimensionality reduction
- D-Association rule learning

- A-ReLU (Rectified Linear Unit)
- B-Sigmoid
- C-Tanh (Hyperbolic Tangent)
- D-Exponential
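Most of the concepts in this section (hidden layers, ReLU activations, learning rate, early stopping) come together in scikit-learn's `MLPClassifier`. A small sketch on synthetic data; the layer size and learning rate are illustrative, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# One hidden layer of 32 ReLU units. early_stopping=True holds out part of
# the training data as a validation set and stops when the validation score
# stops improving, guarding against overfitting
clf = MLPClassifier(hidden_layer_sizes=(32,), activation="relu",
                    learning_rate_init=0.01, early_stopping=True,
                    max_iter=500, random_state=0)
clf.fit(X, y)
print(f"train accuracy: {clf.score(X, y):.3f}")
```

Training uses backpropagation under the hood: a forward pass computes the output, the loss is evaluated, and errors are propagated backward to update the weights.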
Data Science / Support Vector Machines (SVM)

- A-High interpretability
- B-Robustness to outliers
- C-Low computational cost
- D-Ability to handle non-linear relationships

- A-To transform features into a higher-dimensional space
- B-To regularize the model
- C-To decrease the number of features
- D-To speed up training

- A-Larger margin, more misclassifications
- B-Smaller margin, fewer misclassifications
- C-Larger margin, fewer misclassifications
- D-Smaller margin, more misclassifications

- A-To introduce a penalty for misclassifications
- B-To increase the margin between classes
- C-To decrease the margin between classes
- D-To regularize the model

- A-Data points that lie on the decision boundary
- B-Data points that are correctly classified
- C-Data points that are closest to the decision boundary
- D-Data points that are farthest from the decision boundary

- A-SVM maximizes the margin between classes, while logistic regression minimizes classification error
- B-SVM is a parametric model, while logistic regression is a non-parametric model
- C-SVM is a discriminative model, while logistic regression is a generative model
- D-SVM is a linear model, while logistic regression is a non-linear model

- A-Smoother decision boundary
- B-Sharper decision boundary
- C-Larger margin
- D-Smaller margin

- A-To increase the dimensionality of the feature space
- B-To decrease the dimensionality of the feature space
- C-To transform non-linearly separable data into linearly separable data
- D-To regularize the SVM model

- A-Linear kernel
- B-Polynomial kernel
- C-Exponential kernel
- D-Sigmoid kernel
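The kernel-trick options above hinge on computing inner products in a higher-dimensional feature space without ever constructing that space. A sketch of the Gaussian (RBF) kernel in NumPy; the sample points and `gamma` value are illustrative:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel: K[i, j] = exp(-gamma * ||x_i - y_j||^2).

    Acts as an inner product in an implicit infinite-dimensional feature
    space, which is what lets a linear SVM separate non-linear data."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 0.0]])
K = rbf_kernel(X, X)
print(K.round(3))  # diagonal is 1.0: each point has similarity 1 with itself
```

Values decay toward 0 as points move apart, so `gamma` controls how local the resulting decision boundary is.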
- A-To maximize the margin between classes
- B-To minimize the number of support vectors
- C-To minimize the classification error
- D-To maximize the sparsity of the solution

- A-Hard-margin SVM ignores misclassified points, while soft-margin SVM penalizes them
- B-Hard-margin SVM allows for misclassification, while soft-margin SVM does not
- C-Hard-margin SVM has a larger margin, while soft-margin SVM has a smaller margin
- D-Hard-margin SVM uses a linear kernel, while soft-margin SVM uses a non-linear kernel

- A-Gaussian kernel is faster to compute
- B-Gaussian kernel handles non-linear relationships between features better
- C-Gaussian kernel is less prone to overfitting
- D-Gaussian kernel has a higher sparsity level

- A-To transform non-linearly separable data into linearly separable data
- B-To reduce the dimensionality of the feature space
- C-To regularize the SVM model
- D-To speed up the training process

- A-High computational cost
- B-Low predictive accuracy
- C-Sensitivity to feature scaling
- D-Inability to handle non-linear relationships

- A-The margin is increased
- B-The margin is decreased
- C-The model becomes more sensitive to outliers
- D-The model becomes less sensitive to outliers

- A-To control the width of the margin
- B-To control the complexity of the model
- C-To control the degree of polynomial kernel
- D-To control the influence of misclassified points

- A-Data points that are closest to the decision boundary
- B-Data points that lie on the decision boundary
- C-Data points that are correctly classified
- D-Data points that are misclassified

- A-Linear kernel
- B-Polynomial kernel
- C-Gaussian kernel
- D-Sigmoid kernel

- A-Classification
- B-Regression
- C-Clustering
- D-Dimensionality Reduction

- A-The distance between support vectors
- B-The distance between the decision boundary and the support vectors
- C-The number of support vectors
- D-The width of the decision boundary
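The recurring themes above, the `C` penalty, feature scaling, and support vectors, can be seen directly in scikit-learn's `SVC`. A sketch on synthetic data; the `C` values are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# SVMs are sensitive to feature scaling, so standardize first.
# Small C -> softer margin, more misclassifications tolerated;
# large C -> harder margin, heavier penalty on misclassified points
for C in (0.1, 10.0):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=C))
    clf.fit(X, y)
    n_sv = clf.named_steps["svc"].support_vectors_.shape[0]
    print(f"C={C}: train accuracy {clf.score(X, y):.3f}, "
          f"support vectors {n_sv}")
```

Only the support vectors, the points closest to (or on the wrong side of) the margin, determine the fitted boundary; the rest of the data could be discarded without changing it.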
Data Science / Random Forests

- A-Feature importance scores sum up to 1
- B-Feature importance scores can be negative
- C-Feature importance scores represent the predictive power of features
- D-Feature importance scores are not affected by the number of trees in the forest

- A-Increases model variance
- B-Decreases model variance
- C-Increases model bias
- D-Decreases model bias

- A-Bagging trains each model independently, while boosting trains models sequentially
- B-Bagging combines multiple weak learners to create a strong learner
- C-Bagging reduces variance, while boosting reduces bias
- D-Bagging uses random subsets of the data for training

- A-The model becomes more prone to overfitting
- B-The model becomes more prone to underfitting
- C-The model becomes more computationally expensive
- D-The model's performance remains unaffected

- A-Grid search
- B-Random search
- C-Manual tuning
- D-All of the above

- A-To reduce the computational complexity of training
- B-To increase the number of features used in each tree
- C-To reduce bias in the model
- D-To create diverse datasets for training each tree

- A-Random Forests are computationally expensive
- B-Random Forests are prone to underfitting
- C-Random Forests are less interpretable
- D-Random Forests have higher bias

- A-"gini"
- B-"entropy"
- C-"mse"
- D-"mae"

- A-Random initialization of parameters
- B-Random selection of features
- C-Random initialization of weights
- D-Random initialization of centroids

- A-n_estimators
- B-max_depth
- C-criterion
- D-min_samples_split

- A-It specifies the maximum number of features to consider for splitting at each node
- B-It controls the maximum depth of each individual tree
- C-It determines the minimum number of samples required to split a node
- D-It sets the number of trees in the forest

- A-Mean decrease in impurity (MDI)
- B-Mean decrease in accuracy (MDA)
- C-Permutation importance
- D-Partial dependence plots
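The importance-estimation methods listed above (MDI and permutation importance) are both available in scikit-learn. A sketch on synthetic data; the dataset sizes are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=300, n_features=6,
                           n_informative=3, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Mean decrease in impurity (MDI): read off the fitted trees; sums to 1
print(rf.feature_importances_.round(3))

# Permutation importance: the drop in score when one feature's column is
# shuffled, breaking its relationship with the target
perm = permutation_importance(rf, X, y, n_repeats=5, random_state=0)
print(perm.importances_mean.round(3))
```

The two rankings usually agree on strongly predictive features, but MDI can overstate high-cardinality features, which is why permutation importance is often checked alongside it.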
- A-To rank features based on their predictive power
- B-To increase the number of features used in each tree
- C-To reduce the computational complexity of training
- D-To remove irrelevant features from the dataset

- A-Error calculated on the training data
- B-Error calculated on the validation data
- C-Error calculated on data not used during training
- D-Error calculated on data not used during testing

- A-By taking the mode of predictions from all trees
- B-By taking the mean of predictions from all trees
- C-By averaging the probabilities from all trees
- D-By summing the predictions from all trees

- A-To reduce bias
- B-To reduce variance
- C-To speed up training
- D-To increase interpretability

- A-By using a fixed set of features for each tree
- B-By training each tree on the entire dataset
- C-By selecting a random subset of features for each tree
- D-By enforcing a fixed tree depth for each tree

- A-Nodes and branches
- B-Features and labels
- C-Estimators and trees
- D-Leaves and roots

- A-Random Forests are easier to interpret
- B-Random Forests are less prone to overfitting
- C-Random Forests have faster training time
- D-Random Forests require fewer hyperparameters to tune
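The ideas above, bootstrap sampling, random feature subsets per split, and out-of-bag error, correspond directly to `RandomForestClassifier` parameters. A sketch on synthetic data; the parameter values are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# Each tree trains on a bootstrap sample of the rows and considers a random
# subset of features at each split (max_features="sqrt"). oob_score=True
# evaluates every tree on the rows it never saw during training, giving a
# free estimate of generalization error without a separate validation set
rf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                            oob_score=True, random_state=0)
rf.fit(X, y)
print(f"OOB accuracy: {rf.oob_score_:.3f}")
```

For classification, the forest predicts by majority vote (the mode) across trees; for regression it averages their outputs.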
Data Science / Decision Trees

- A-It assigns the missing value to the most common value in the dataset
- B-It assigns the missing value randomly
- C-It uses surrogate splits to handle missing values
- D-It removes the samples with missing values from the dataset

- A-The number of nodes in the tree
- B-The number of branches in the tree
- C-The maximum number of splits from the root node to a leaf node
- D-The maximum number of samples in a leaf node

- A-Prone to overfitting
- B-Require scaling of features
- C-Can only handle numerical data
- D-Not interpretable

- A-Gini impurity
- B-Information gain
- C-Mean squared error
- D-Chi-square test

- A-A measure of impurity in a set of examples
- B-The rate of information gain
- C-The number of decision nodes in the tree
- D-The depth of the tree
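The splitting criteria named above, Gini impurity and entropy (the basis of information gain), are short formulas. A NumPy sketch with illustrative label sets:

```python
import numpy as np

def gini(labels):
    """Gini impurity: 1 - sum(p_i^2); 0 for a pure node."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    """Shannon entropy in bits: sum(p_i * log2(1/p_i)), i.e. the usual
    -sum(p_i * log2 p_i); 0 for a pure node."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return np.sum(p * np.log2(1.0 / p))

pure = ["a", "a", "a", "a"]
mixed = ["a", "a", "b", "b"]
print(gini(pure), entropy(pure))    # 0.0 0.0 (pure node)
print(gini(mixed), entropy(mixed))  # 0.5 1.0 (maximally mixed 50/50 node)
```

A split's information gain is the parent's entropy minus the weighted entropy of its children; the tree greedily picks the split that reduces impurity the most.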
- A-The last decision node
- B-The feature that best splits the data
- C-The leaf node with the highest information gain
- D-The output prediction

- A-Trimming branches to reduce model complexity and overfitting
- B-Adding more layers to increase model capacity
- C-Increasing the depth of the tree to capture more details
- D-Removing outliers from the dataset

- A-K-means
- B-Gradient Descent
- C-ID3/C4.5, CART, or Gini impurity
- D-Support Vector Machine (SVM)

- A-Nodes and branches
- B-Features and labels
- C-Weights and biases
- D-Matrices and vectors

- A-Regression
- B-Clustering
- C-Classification
- D-Dimensionality Reduction
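The pruning and depth options above map onto `DecisionTreeClassifier` parameters in scikit-learn. A sketch on synthetic data; the `max_depth` and `ccp_alpha` values are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# An unrestricted tree grows until its leaves are pure and tends to
# overfit; limiting depth or applying cost-complexity pruning (ccp_alpha)
# trims branches to reduce complexity
deep = DecisionTreeClassifier(random_state=0).fit(X, y)
pruned = DecisionTreeClassifier(max_depth=3, ccp_alpha=0.01,
                                random_state=0).fit(X, y)
print(deep.get_depth(), pruned.get_depth())
```

The depth here is the maximum number of splits on any path from the root to a leaf, which is exactly what `max_depth` caps.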
Data Science / Logistic Regression

- A-The ratio of the probabilities of the two classes
- B-The logarithm of the odds of the positive class
- C-The logarithm of the odds of the negative class
- D-The ratio of the odds of the positive class to the odds of the negative class

- A-Undersampling the majority class
- B-Oversampling the minority class
- C-Using synthetic minority oversampling technique (SMOTE)
- D-Assigning equal weights to each class during training

- A-To determine the learning rate
- B-To define the decision boundary between classes
- C-To set the number of iterations for training
- D-To adjust the sensitivity of the model

- A-Gradient Descent
- B-K-means
- C-Decision Tree
- D-Support Vector Machine (SVM)

- A-Values between 0 and 1
- B-Values between -1 and 1
- C-Values between 0 and ∞
- D-Values between -∞ and +∞

- A-The line that separates the data points into two classes
- B-The line with the maximum slope
- C-The line that minimizes the sum of squared errors
- D-The line that intersects the most data points

- A-Mean Squared Error (MSE)
- B-R-squared (R2)
- C-Accuracy, Precision, Recall, F1-score
- D-Area Under the Receiver Operating Characteristic (ROC) Curve (AUC-ROC)

- A-Gaussian distribution
- B-Poisson distribution
- C-Binomial distribution
- D-Exponential distribution

- A-ReLU
- B-Sigmoid
- C-Tanh
- D-Softmax
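The sigmoid and log-odds options above are two sides of the same function: the sigmoid maps log-odds to a probability in (0, 1), and the logit maps back. A NumPy sketch with illustrative inputs:

```python
import numpy as np

def sigmoid(z):
    """Maps any real number (the log-odds) into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-5.0, 0.0, 5.0])
p = sigmoid(z)
print(p.round(3))  # [0.007 0.5 0.993]

# Inverse: recovering the log-odds (logit) from a probability
log_odds = np.log(p / (1 - p))
print(np.allclose(log_odds, z))  # True
```

This is why logistic regression is linear in the log-odds: the model fits a linear function of the features, then squashes it through the sigmoid to get a class probability.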
- A-Predicting continuous numerical values
- B-Classifying input data into discrete categories
- C-Clustering similar data points together
- D-Extracting features from raw data
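The class-imbalance and decision-threshold options above can be exercised with scikit-learn's `LogisticRegression`. A sketch on deliberately imbalanced synthetic data; the class weights and threshold shown are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalanced two-class data (roughly 90/10); class_weight="balanced"
# reweights the loss so the minority class is not drowned out
X, y = make_classification(n_samples=300, weights=[0.9, 0.1], random_state=0)
clf = LogisticRegression(class_weight="balanced").fit(X, y)

# predict_proba returns sigmoid probabilities; the default 0.5 decision
# threshold can be moved to trade precision against recall
proba = clf.predict_proba(X[:5])[:, 1]
preds = (proba >= 0.5).astype(int)
print(proba.round(3), preds)
```

Because the output is a probability, ranking metrics such as AUC-ROC evaluate the model across all thresholds, while accuracy, precision, recall, and F1 depend on the threshold chosen.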
Data Science / Linear Regression

- A-To increase bias and reduce variance
- B-To decrease bias and increase variance
- C-To penalize large coefficients and reduce overfitting
- D-To penalize small coefficients and increase overfitting

- A-Ridge Regression
- B-Lasso Regression
- C-Elastic Net Regression
- D-Decision Tree Regression

- A-The presence of outliers in the data
- B-The relationship between the independent and dependent variables is not linear
- C-The presence of strong correlations among independent variables
- D-The assumption that the residuals are normally distributed

- A-The intercept of the regression line
- B-The change in the dependent variable for a one-unit change in the independent variable
- C-The average value of the dependent variable
- D-The standard deviation of the dependent variable

- A-The strength of the relationship between independent and dependent variables
- B-The slope of the regression line
- C-The proportion of variance in the dependent variable explained by the independent variables
- D-The intercept of the regression line

- A-Gradient Descent
- B-K-means
- C-Decision Tree
- D-Support Vector Machine (SVM)
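The options above cover the slope, R-squared, and the regularized variants (Ridge, Lasso). A sketch on synthetic data generated from a known line, y = 3x + 2 plus noise, so the recovered coefficients can be checked; the alpha values are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge

# Noisy linear data: y = 3x + 2 + noise
rng = np.random.default_rng(0)
X = rng.random((100, 1)) * 10
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 1, 100)

for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)):
    model.fit(X, y)
    # coef_ is the slope (change in y per unit change in x); score() is
    # R^2, the proportion of variance in y explained by the model
    print(type(model).__name__, model.coef_.round(2),
          round(model.intercept_, 2), round(model.score(X, y), 3))
```

Ridge penalizes the squared size of the coefficients and Lasso their absolute size; both trade a little bias for lower variance, and Lasso can shrink coefficients exactly to zero, performing feature selection.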