Data Scientist Must Know Questions



1

Can you provide an example of how you would use Pandas to clean and preprocess a large dataset for analysis in Amazon's data ecosystem?

  1. Remove duplicates.

  2. Handle missing values (imputation or removal).

  3. Convert data types as needed.

  4. Standardize column names.

  5. Save the cleaned dataset for analysis.
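
A minimal Pandas sketch of these steps, assuming a hypothetical CSV file and column names (orders_raw.csv, order_amount, customer_id, and order_date are made up for illustration):

```python
import pandas as pd

# Load a hypothetical raw dataset.
df = pd.read_csv("orders_raw.csv")

# 1. Remove duplicate rows.
df = df.drop_duplicates()

# 2. Handle missing values: impute a numeric column with its median,
#    drop rows where a critical identifier is missing.
df["order_amount"] = df["order_amount"].fillna(df["order_amount"].median())
df = df.dropna(subset=["customer_id"])

# 3. Convert data types as needed.
df["order_date"] = pd.to_datetime(df["order_date"])
df["customer_id"] = df["customer_id"].astype(str)

# 4. Standardize column names (lowercase, underscores).
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

# 5. Save the cleaned dataset for analysis.
df.to_csv("orders_clean.csv", index=False)
```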

2

How would you handle missing data in a dataset using Python, and why is it important in the context of Business Intelligence?

  • Removal: drop rows (or columns) with missing data when the loss of information is acceptable.
  • Imputation: replace missing values with substitutes, most commonly the column mean, median, or mode, or with more advanced techniques such as interpolation or machine-learning models that predict the missing value from the other features.
  • Flagging: keep an indicator column marking which values were missing so that information is not silently lost. (A short Pandas sketch of all three appears after the list below.)

Handling missing data is important in the context of Business Intelligence for several reasons:

  1. Maintaining Data Quality

  2. Preserving Sample Size

  3. Improving Model Performance

  4. Meeting Reporting Requirements
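
A short Pandas sketch of the three approaches above (removal, imputation, flagging); the DataFrame and column names are invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "revenue": [100.0, None, 250.0, None],
    "region": ["NA", "EU", None, "NA"],
})

# Removal: drop rows with any missing value.
dropped = df.dropna()

# Imputation: mean for a numeric column, mode for a categorical one.
imputed = df.copy()
imputed["revenue"] = imputed["revenue"].fillna(imputed["revenue"].mean())
imputed["region"] = imputed["region"].fillna(imputed["region"].mode()[0])

# Flagging: keep a boolean indicator so downstream reports know the value was filled.
imputed["revenue_was_missing"] = df["revenue"].isna()
```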

3

Describe a situation where you had to optimize Python code for performance. What techniques did you use, and how would you apply them to Amazon's BI tasks?

  1. Use Efficient Data Structures: I replaced lists with sets or dictionaries where appropriate, since they offer average O(1) membership tests and key lookups compared with O(n) scans of a list. This reduced the time complexity of the lookup-heavy parts of the code.

    Application to Amazon's BI tasks: In Amazon's BI tasks, using efficient data structures like dictionaries or sets can improve the performance of tasks involving data manipulation and aggregation, such as grouping data by categories or performing lookups on large datasets.

  2. Vectorization: I utilized vectorized operations wherever possible, especially when working with numerical data using libraries like NumPy and Pandas. Vectorized operations are optimized for speed and can significantly reduce processing time compared to traditional iterative approaches.

    Application to Amazon's BI tasks: Vectorization can be applied to tasks involving data transformation, filtering, and aggregation in Amazon's BI tasks. For example, when performing calculations on large datasets, using vectorized operations in Pandas can greatly improve processing speed.
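
An illustrative sketch of both techniques; the data and column names are made up, and the commented-out loop shows the slower style being replaced:

```python
import numpy as np
import pandas as pd

# 1. Efficient data structures: set membership is O(1) on average vs O(n) for a list.
valid_ids = set(range(1_000_000))
print(987_654 in valid_ids)

# 2. Vectorization: operate on whole columns instead of looping in Python.
df = pd.DataFrame({
    "price": np.random.rand(1_000_000),
    "qty": np.random.randint(1, 10, size=1_000_000),
})

# Slow, row-by-row style (avoid):
# revenue = [p * q for p, q in zip(df["price"], df["qty"])]

# Fast, vectorized style:
df["revenue"] = df["price"] * df["qty"]
totals_by_qty = df.groupby("qty")["revenue"].sum()   # aggregation stays vectorized too
```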

4

How does a logistic regression model know what the coefficients are?

In logistic regression, the coefficients (also known as weights or parameters) are estimated during the training process using an optimization algorithm such as gradient descent. The goal of logistic regression is to learn the relationship between the independent variables (features) and the dependent variable (target) by estimating the coefficients that minimize the error between the predicted values and the actual values.

  • In logistic regression, coefficients (weights) are learned during training.

  • Initially, coefficients are set to random values.

  • The model computes predictions using these coefficients.

  • A loss function measures the error between predictions and actual labels.

  • An optimization algorithm (e.g., gradient descent) updates coefficients to minimize the loss.

  • This process iterates until coefficients converge.

  • The final coefficients represent the learned relationship between features and target.
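
A minimal NumPy sketch of that training loop, assuming full-batch gradient descent on the log loss (learning rate and iteration count are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iter=1000):
    """Learn logistic regression coefficients by gradient descent on the log loss."""
    w = np.zeros(X.shape[1])              # coefficients start at zero (or small random values)
    b = 0.0
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)            # predictions with the current coefficients
        grad_w = X.T @ (p - y) / len(y)   # gradient of the log loss w.r.t. w
        grad_b = np.mean(p - y)
        w -= lr * grad_w                  # update step that reduces the loss
        b -= lr * grad_b
    return w, b

# Toy usage:
# X = np.random.randn(100, 3); y = (X[:, 0] > 0).astype(float)
# w, b = fit_logistic(X, y)
```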

5


Difference between convex and non-convex cost function; what does it mean when a cost function is non-convex?

  • A convex cost function is bowl-shaped: any line segment between two points on the curve lies on or above the curve.

  • It has a single global minimum, so optimization (e.g., gradient descent) is relatively straightforward and converges to the same solution regardless of the starting point.

  • A non-convex cost function has bumps and dips, i.e., regions where the curve bends the other way.

  • It may have multiple local minima (and saddle points), so optimization is harder: gradient-based methods can get stuck in a poor local minimum and may need techniques such as random restarts, momentum, or careful initialization.
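
A small sketch illustrating the difference, using f(x) = x² (convex) and f(x) = x⁴ − 3x² (non-convex, with two local minima near x ≈ ±1.22); the learning rate and starting points are arbitrary:

```python
def descend(grad, x0, lr=0.05, n_iter=200):
    """Plain gradient descent on a 1-D function, given its derivative."""
    x = x0
    for _ in range(n_iter):
        x -= lr * grad(x)
    return x

# Convex: f(x) = x^2 has one global minimum at 0; any starting point converges there.
print(descend(lambda x: 2 * x, x0=5.0))               # ~0.0
print(descend(lambda x: 2 * x, x0=-7.0))              # ~0.0

# Non-convex: f(x) = x^4 - 3x^2 has two local minima near x = +/-1.22;
# which one gradient descent finds depends on where it starts.
print(descend(lambda x: 4 * x**3 - 6 * x, x0=2.0))    # ~ +1.22
print(descend(lambda x: 4 * x**3 - 6 * x, x0=-2.0))   # ~ -1.22
```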

6

What is a cost function?

A cost function measures the error between a model's predicted values and the actual values. Training aims to minimize this error in order to improve the model's performance.
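
For example, mean squared error is a common cost function for regression; a minimal sketch:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average of squared differences between actual and predicted values."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

print(mse([3.0, 5.0, 2.0], [2.5, 5.5, 2.0]))   # -> 0.1666...
```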

7

Is random weight assignment better than assigning same weights to the units in the hidden layer?

  • Random weight assignment is preferred over assigning the same weights to units in the hidden layer.

  • It helps break symmetry in the network and encourages each unit to learn different features.

  • Random initialization promotes exploration of the solution space and improves generalization.

  • Overall, it leads to better learning dynamics and improved model performance.
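
A toy sketch of the symmetry problem: with identical weights, every hidden unit computes the same pre-activation, so all units receive the same gradient and never differentiate; random initialization breaks that tie (shapes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))          # toy batch: 8 samples, 4 features

# Same weights: every hidden unit computes an identical activation.
W_same = np.full((4, 3), 0.5)
print(np.allclose(X @ W_same[:, 0], X @ W_same[:, 1]))   # True -> symmetric, units stay identical

# Random weights: each hidden unit starts from a different point and can learn a different feature.
W_rand = rng.normal(scale=0.01, size=(4, 3))
print(np.allclose(X @ W_rand[:, 0], X @ W_rand[:, 1]))   # False -> symmetry broken
```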

8

Given a bar plot and imagine you are pouring water from the top, how to qualify how much water can be kept in the bar chart?
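
This is the classic "trapping rain water" problem. A minimal sketch of one common two-pointer solution, not necessarily the intended answer (the function name and example input are illustrative):

```python
def trapped_water(heights):
    """Total units of water held between bars of the given heights (two-pointer approach)."""
    left, right = 0, len(heights) - 1
    left_max = right_max = 0
    water = 0
    while left < right:
        if heights[left] < heights[right]:
            left_max = max(left_max, heights[left])
            water += left_max - heights[left]     # water above the current left bar
            left += 1
        else:
            right_max = max(right_max, heights[right])
            water += right_max - heights[right]   # water above the current right bar
            right -= 1
    return water

print(trapped_water([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]))   # -> 6
```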

9

What is Overfitting?

  • Definition: Occurs when a model learns the training data too well, capturing noise and random fluctuations rather than underlying patterns.

  • Characteristics: High training accuracy, poor test accuracy; complexity; large generalization gap; high variance; sensitivity to noise; risk of memorization.

  • Mitigation Techniques: Simplifying the model, regularization, cross-validation, feature selection, early stopping, data augmentation.
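
A short scikit-learn sketch of spotting overfitting through the train/test gap and shrinking it by simplifying the model (dataset and hyperparameters are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unconstrained tree: near-perfect training accuracy, noticeably worse test accuracy
# (a large generalization gap is the signature of overfitting).
deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(deep.score(X_tr, y_tr), deep.score(X_te, y_te))

# Simpler model (max_depth limits complexity): typically a smaller gap.
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(shallow.score(X_tr, y_tr), shallow.score(X_te, y_te))
```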

10

Why is gradient checking important?

  • Definition: Gradient checking is a technique used in machine learning to verify the correctness of gradient computation by comparing analytically computed gradients with numerically estimated gradients.

  • Importance:

    • Validates gradient computation process.

    • Helps debug implementation errors.

    • Prevents training instabilities.

    • Enhances confidence in model training.
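
A minimal sketch of gradient checking: compare an analytic gradient against a centered finite-difference estimate (the loss f(w) = Σw² is just a stand-in with a known gradient 2w):

```python
import numpy as np

def numerical_grad(f, w, eps=1e-6):
    """Centered finite-difference estimate of df/dw for each parameter."""
    grad = np.zeros_like(w)
    for i in range(len(w)):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        grad[i] = (f(w_plus) - f(w_minus)) / (2 * eps)
    return grad

w = np.array([1.0, -2.0, 0.5])
analytic = 2 * w                                           # known analytic gradient of sum(w^2)
numeric = numerical_grad(lambda v: np.sum(v ** 2), w)

# If the gradient implementation is correct, the relative difference should be tiny (~1e-7 or less).
rel_diff = np.linalg.norm(analytic - numeric) / (np.linalg.norm(analytic) + np.linalg.norm(numeric))
print(rel_diff)
```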

11

Gradient Computation

  • Definition: Gradient computation refers to the process of calculating the gradient, which is a vector containing the partial derivatives of a function with respect to its parameters or input variables.

  • Importance: Gradients are used in optimization algorithms such as gradient descent to update model parameters iteratively, minimizing the loss function and improving the performance of machine learning models.

12

Decision Tree

  • Description: A decision tree is a tree-like structure where internal nodes represent features, branches represent decision rules, and leaf nodes represent the outcomes.

  • Advantages: Easy to interpret and visualize, can handle both numerical and categorical data, implicitly performs feature selection.

  • Disadvantages: Prone to overfitting, sensitive to small variations in the data, may create biased trees if some classes dominate.

13

Support Vector Machine (SVM)

  • Description: SVM is a supervised learning algorithm that finds the hyperplane that best separates classes in feature space.

  • Advantages: Effective in high-dimensional spaces, memory efficient, versatile due to different kernel functions, effective in cases where the number of dimensions exceeds the number of samples.

  • Disadvantages: Less effective on large datasets, sensitive to noise and overlapping classes, complex and difficult to interpret.
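
An illustrative scikit-learn sketch of kernel versatility on a toy non-linearly separable dataset (dataset and parameters are arbitrary):

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# A linear kernel can only draw a straight boundary; an RBF kernel can follow the curved
# class boundary and typically scores higher on this data.
print(SVC(kernel="linear").fit(X, y).score(X, y))
print(SVC(kernel="rbf").fit(X, y).score(X, y))
```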

14

Random Forest

  • Description: Random forest is an ensemble learning method that constructs a multitude of decision trees during training and outputs the mode of the classes (classification) or the mean prediction (regression) of the individual trees.

  • Advantages: Reduces overfitting by averaging multiple decision trees, handles missing values and maintains accuracy when a large proportion of data is missing.

  • Disadvantages: Less interpretable compared to individual decision trees, may require more computational resources due to the ensemble of trees.
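
A small scikit-learn sketch comparing a single decision tree with a random forest under cross-validation (synthetic data; exact scores will vary, but the ensemble typically generalizes better):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=25, random_state=0)

# Averaging many randomized trees usually reduces variance relative to one tree.
tree_cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
forest_cv = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0), X, y, cv=5).mean()
print(tree_cv, forest_cv)
```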

15

Boosting

  • Description: Boosting is an ensemble learning technique that combines multiple weak learners to create a strong learner. It iteratively trains models, giving more weight to instances that were previously misclassified.

  • Advantages: Can improve model accuracy by focusing on hard-to-classify instances, less prone to overfitting compared to bagging methods.

  • Disadvantages: Sensitive to noisy data and outliers, may be computationally expensive and require careful tuning of hyperparameters.

16

Describe the criterion for a particular model selection

  • Description: The criterion for model selection refers to the metrics or criteria used to evaluate and choose the best model among different candidates.

  • Importance: Helps identify the most suitable model for the given problem based on factors such as accuracy, interpretability, computational efficiency, and generalization performance.

  • Examples: Common criteria include accuracy, precision, recall, F1-score, AUC-ROC, computational complexity, interpretability, and ease of implementation.
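
A quick sketch computing several of these criteria with scikit-learn (the labels and probabilities below are made up):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
y_prob = [0.1, 0.6, 0.8, 0.9, 0.4, 0.2, 0.7, 0.3]   # predicted probabilities for class 1

print(accuracy_score(y_true, y_pred))
print(precision_score(y_true, y_pred))
print(recall_score(y_true, y_pred))
print(f1_score(y_true, y_pred))
print(roc_auc_score(y_true, y_prob))
```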

17

Importance of Dimension Reduction

  • Description: Dimension reduction refers to the process of reducing the number of input variables or features in a dataset.

  • Importance: Dimension reduction is important because it:

    1. Reduces Overfitting: Helps mitigate the curse of dimensionality and reduces the risk of overfitting, especially in high-dimensional datasets.

    2. Improves Computational Efficiency: Reduces computational complexity and training time, especially for algorithms that are sensitive to the number of features.

    3. Enhances Interpretability: Simplifies the model and makes it easier to interpret, visualize, and understand the underlying patterns in the data.

    4. Facilitates Data Exploration: Helps identify important features and relationships in the data, leading to better insights and decision-making.

  • Techniques: Common dimension reduction techniques include Principal Component Analysis (PCA), Singular Value Decomposition (SVD), t-distributed Stochastic Neighbor Embedding (t-SNE), and feature selection methods such as Recursive Feature Elimination (RFE) and Lasso Regression.
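
A minimal PCA sketch: synthetic data driven by only a handful of underlying factors gets compressed from 50 features to roughly that many components while keeping 95% of the variance (all numbers are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 5))                         # 5 underlying factors
mixing = rng.normal(size=(5, 50))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 50))     # 50 observed, mostly redundant features

pca = PCA(n_components=0.95)                               # keep enough components for 95% of the variance
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)                      # far fewer columns survive
print(pca.explained_variance_ratio_[:5])
```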

18

What are the assumptions for logistic and linear regression?

Assumptions for Linear Regression:

  1. Linearity: There exists a linear relationship between the independent variables and the dependent variable.

  2. Independence: The residuals (errors) are independent of each other.

  3. Homoscedasticity: The variance of the residuals is constant across all levels of the independent variables (i.e., the residuals have constant variance).

  4. Normality: The residuals are normally distributed (i.e., follow a Gaussian distribution).

Assumptions for Logistic Regression:

  1. Linearity of Log Odds: There exists a linear relationship between the log odds of the outcome and the independent variables.

  2. Independence: The observations are independent of each other.

  3. Absence of Multicollinearity: The independent variables are not highly correlated with each other.

  4. Large Sample Size: Logistic regression performs better with a large sample size.

  5. Binary or Ordinal Outcome: Logistic regression is suitable for binary or ordinal outcomes.
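
An illustrative sketch of checking two of these assumptions on synthetic data with statsmodels and SciPy: residual normality (linear regression) and multicollinearity via variance inflation factors (the data-generating process is made up):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                   # synthetic predictors
y = 2.0 + X @ np.array([1.5, -0.5, 0.3]) + rng.normal(scale=0.5, size=200)

X_const = sm.add_constant(X)
model = sm.OLS(y, X_const).fit()

# Normality of residuals: a Shapiro-Wilk p-value well above 0.05 gives no evidence against normality.
print(stats.shapiro(model.resid).pvalue)

# Multicollinearity: VIFs close to 1 mean the predictors are not highly correlated.
print([variance_inflation_factor(X_const, i) for i in range(1, X_const.shape[1])])
```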

19

In what situation do we run an A/B test?

An A/B test is usually conducted to evaluate the impact of a newly launched feature or a change to an existing product. A successful variant typically increases user engagement and conversion rate and reduces bounce rate. A/B testing is also known as split testing or bucket testing.

20

Highlight one difference between one-tailed and two-tailed tests.

One-tailed tests have one critical region, whereas two-tailed tests have two. In other words, a one-tailed test checks for an effect in a single direction, while a two-tailed test checks for an effect in either direction, positive or negative.

21

What do you know about a null and alternative hypothesis?

A null hypothesis is a statement that there is no relationship (or no difference) between the variables being studied. The alternative hypothesis states that a relationship (or difference) does exist.

22

For how much time should you run an A/B test?

One can estimate the run time from the required sample size, the number of daily visitors, and the number of variants.

Example: if your website receives 40K visitors per day, the required sample size is 200K per variant, and there are two variants, the test should run for (200K × 2) / 40K = 10 days.

Furthermore, an A/B test should run for at least 14 days (two full weeks) to smooth out the variation between weekdays and weekends.
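
A sketch of turning these inputs into a sample size and duration using statsmodels' power analysis (the baseline rate, lift, alpha, power, and traffic figures are all illustrative):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Sample size per variant to detect a lift from a 2.0% to a 2.1% conversion rate
# at alpha = 0.05 with 80% power.
effect = proportion_effectsize(0.021, 0.020)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)

daily_visitors = 40_000            # total traffic split across the variants
n_variants = 2
days = n_per_variant * n_variants / daily_visitors
print(round(n_per_variant), "per variant, ~", round(days, 1), "days")
```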

23

Explain Type I and Type II errors.

A Type I error occurs when one rejects a null hypothesis that is actually true for the population: the test and control do not really differ, but we conclude that they do (a false positive).

A Type II error occurs when one fails to reject a null hypothesis that is actually false: the test and control really do differ, but the test fails to detect the difference (a false negative).

24

What do you mean by p-value? Explain it in simple terms.

The p-value in A/B testing is a measure of statistical significance. It is the probability of obtaining a test statistic (calculated from the data) at least as extreme as the one observed in the experiment, assuming the null hypothesis is true.

The null hypothesis suggests that there is no significant difference between the groups being compared. The p-value is compared to the chosen significance level (alpha) to decide whether to reject the null hypothesis. A small p-value (typically < 0.05) indicates that the difference is statistically significant.
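
A minimal SciPy sketch: compare two synthetic groups with a two-sample t-test and read off the p-value (group means and sizes are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=500)     # e.g. order value in group A
treatment = rng.normal(loc=10.3, scale=2.0, size=500)   # e.g. order value in group B

t_stat, p_value = stats.ttest_ind(treatment, control)
print(p_value, "-> reject H0" if p_value < 0.05 else "-> fail to reject H0")
```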

25

Is it possible to keep the treated samples different from the assigned samples?

Yes, it is possible to keep the treated samples different from the assigned samples. An assigned sample is one that becomes a part of the campaign. A treated sample on the other hand is a subset of the assigned sample on the basis of certain conditions.

26

What do you understand by alpha and beta in terms of A/B testing?

Alpha denotes the probability of a Type I error; it is called the significance level. Beta denotes the probability of a Type II error (1 − beta is the test's statistical power).

27

What is the relationship between covariance and correlation?

Correlation is the normalized version of covariance: it is the covariance divided by the product of the two variables' standard deviations, which makes it unit-free and bounds it between −1 and 1.
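
A quick NumPy sketch of the relationship, computing correlation by dividing the covariance by the product of the standard deviations and checking it against np.corrcoef:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2 * x + rng.normal(size=1000)

cov_xy = np.cov(x, y)[0, 1]
corr_manual = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))   # covariance scaled by the std devs
corr_builtin = np.corrcoef(x, y)[0, 1]
print(corr_manual, corr_builtin)                                  # the two values agree
```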

28

Describe the steps involved in implementing an A/B test.

  1. Prepare description for null and alternative hypotheses.

  2. Identify guardrail metric and north star metric.

  3. Estimate the sample size or least detectable effect for the north star metric.

  4. Prepare a blueprint for testing.

  5. Collaborate with instrumentation/engineering teams to put the appropriate tags in place.

  6. Ensure the tags are working well.

  7. Seek approval of the testing plan from Product Managers and engineers.

29

What is the difference between t-test and Z-test?

  • t-test: This test is used when the sample size is relatively small (typically fewer than 30 observations per group) and/or the population standard deviation is unknown. It relies on the Student's t-distribution, whose heavier tails account for the extra uncertainty introduced by estimating the variance from the sample.

  • Z-test: The Z-test is employed when the sample size is large (usually greater than 30) or the population standard deviation is known, so the Central Limit Theorem justifies treating the sampling distribution of the mean as normal.

30

What are the parameters that need to be specified in A/B testing?

Typical parameters to fix up front include: the significance level (alpha), the statistical power (1 − beta), the minimum (least) detectable effect for the north star metric, the baseline value of that metric, and the resulting sample size per variant, which together with traffic determines how long the test must run.

