L6 - Natural Language Processing

Last updated 8:00 PM on 4/14/26

57 Terms

1. What is Natural Language Processing?

The study of analyzing and modeling text data.

2. Why is text data challenging?

It is unstructured and requires significant preprocessing.

3. What is unstructured data?

Data that is not organized in a predefined format.

4. What is alternative data?

Non-traditional data sources used for insights, such as text from social media or reports.

5. Why is NLP important?

It is widely used in industry for extracting insights from text.

6. What are common NLP applications?

Sentiment analysis, translation, chatbots, speech recognition, and information organization.

7. What is sentiment analysis?

The process of determining whether text is positive, negative, or neutral.

8. Why is sentiment analysis useful?

It can extract opinions and predict behavior from text data.

9. What is preprocessing in NLP?

The process of cleaning and transforming raw text into usable features.

10. What is tokenization?

Breaking text into individual words or tokens.

11. Why remove punctuation and numbers?

They usually add little meaningful information.

12. Why convert text to lowercase?

To treat words consistently regardless of capitalization.

13. What are stop words?

Common words that carry little meaning, such as “the” or “and”.

14. Why remove stop words?

To reduce noise and dimensionality.
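
The preprocessing steps in cards 9-14 (lowercasing, stripping punctuation and numbers, tokenizing, removing stop words) can be sketched in pure Python. The stop-word list here is a tiny illustrative one, not a standard list such as NLTK's:

```python
import re

# Illustrative stop-word list; real pipelines use much larger lists.
STOP_WORDS = {"the", "a", "an", "and", "or", "is", "of", "to", "in"}

def preprocess(text):
    text = text.lower()                    # treat words consistently
    text = re.sub(r"[^a-z\s]", " ", text)  # drop punctuation and numbers
    tokens = text.split()                  # whitespace tokenization
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The market rose 3 percent today!"))
# ['market', 'rose', 'percent', 'today']
```

Each step maps to one card: lowercasing (card 12), punctuation/number removal (card 11), tokenization (card 10), stop-word removal (cards 13-14).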

15. What is stemming?

Reducing words to a root form by removing endings.

16. What is a limitation of stemming?

The resulting root may not be a valid word.

17. What is lemmatization?

Reducing words to their true root using language rules.

18. How does lemmatization differ from stemming?

It is more accurate but computationally slower.
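
A toy suffix-stripping stemmer (a sketch of the idea, not the Porter algorithm) makes the limitation in card 16 concrete: the result need not be a valid word, whereas a lemmatizer would return "run" and "study":

```python
def crude_stem(word):
    # Strip common endings; purely illustrative, not Porter stemming.
    for suffix in ("ing", "ies", "es", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

print(crude_stem("running"))  # 'runn' -- not a valid English word
print(crude_stem("studies"))  # 'stud'
print(crude_stem("cats"))     # 'cat'
```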

19. Why remove rare words?

They are difficult for models to learn from.

20. Why remove very common words?

They provide little discriminatory information.

21. What is an n-gram?

A sequence of n consecutive words.

22. What is a unigram?

A single word.

23. What is a bigram?

A pair of consecutive words.

24. What is a trigram?

A sequence of three consecutive words.

25. Why use n-grams?

To capture context and word order.

26. What is a drawback of n-grams?

They increase feature dimensionality.
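
The unigram/bigram/trigram definitions in cards 21-24 reduce to one sliding-window function; a minimal sketch:

```python
def ngrams(tokens, n):
    # Slide a window of size n over the token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ["natural", "language", "processing", "rocks"]
print(ngrams(tokens, 1))  # unigrams: single words
print(ngrams(tokens, 2))  # bigrams: pairs of consecutive words
print(ngrams(tokens, 3))  # trigrams
```

The dimensionality drawback (card 26) follows directly: a vocabulary of V words yields up to V^2 possible bigram features.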

27. What is a document?

A single piece of text.

28. What is a corpus?

A collection of documents.

29. What is a vocabulary?

The set of all unique words in the corpus.

30. What is vectorization?

Converting text into numerical representations.

31. What is the bag-of-words approach?

A method that represents text using word counts without considering order.

32. What is a document-term matrix?

A matrix where rows are documents and columns are words.

33. Why is the document-term matrix sparse?

Most words do not appear in most documents.
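
Cards 28-33 can be tied together in a few lines: build a vocabulary from a toy corpus, then a bag-of-words document-term matrix (no library needed). The corpus is made up for illustration:

```python
# Three toy documents; each row of the matrix is one document,
# each column one vocabulary word, each entry a word count.
corpus = ["the cat sat", "the dog sat", "the cat saw the dog"]
docs = [d.split() for d in corpus]

vocab = sorted({w for doc in docs for w in doc})  # unique words in the corpus
dtm = [[doc.count(w) for w in vocab] for doc in docs]

print(vocab)  # ['cat', 'dog', 'sat', 'saw', 'the']
for row in dtm:
    print(row)
```

Even in this tiny example several entries are zero; as the corpus and vocabulary grow, zeros dominate, which is the sparsity in card 33.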

34. What is term count?

The number of times a word appears in a document.

35. What is term frequency?

The normalized count of a word in a document.

36. Why normalize term frequency?

To account for differences in document length.

37. What problem arises with common words in search?

They do not help distinguish documents.

38. What is inverse document frequency?

A measure that downweights words appearing in many documents.

39. What is TF-IDF?

A weighting scheme combining term frequency and inverse document frequency.

40. When is TF-IDF high?

When a word is frequent in a document but rare across documents.

41. Why is TF-IDF useful?

It emphasizes important and distinctive words.
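
Cards 35-41 can be computed from scratch. There are several TF-IDF variants; this sketch uses one common form, tf(w, d) × log(N / df(w)), on a made-up three-document corpus:

```python
import math

corpus = [doc.split() for doc in
          ["the cat sat", "the dog sat", "the cat saw the dog"]]
N = len(corpus)

def tf(word, doc):
    return doc.count(word) / len(doc)   # length-normalized count (card 35)

def idf(word):
    df = sum(1 for doc in corpus if word in doc)  # document frequency
    return math.log(N / df)             # downweights ubiquitous words (card 38)

def tf_idf(word, doc):
    return tf(word, doc) * idf(word)

# 'the' appears in every document, so its IDF -- and TF-IDF -- is zero.
print(tf_idf("the", corpus[0]))  # 0.0
# 'saw' is frequent in one document but rare across the corpus, so it scores high.
print(tf_idf("saw", corpus[2]))
```

This illustrates card 40 directly: the score is high exactly when a word is frequent within a document but rare across documents.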

42. What is a dictionary-based method?

A sentiment approach using predefined word lists.

43. What is a limitation of dictionary methods?

They do not learn from data and can be inaccurate.

44. Why are domain-specific dictionaries needed?

General dictionaries may misclassify domain-specific terms.
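
A dictionary-based sentiment scorer (cards 42-44) is just set membership plus counting. The word lists below are tiny and invented for illustration; real lexicons are far larger, and domain-specific ones exist precisely because a general list can misread domain terms:

```python
# Toy sentiment word lists; purely illustrative.
POSITIVE = {"good", "great", "gain", "strong"}
NEGATIVE = {"bad", "weak", "loss", "poor"}

def dictionary_sentiment(tokens):
    # Net count of positive minus negative words; no learning from data.
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(dictionary_sentiment("strong gain this quarter".split()))  # 'positive'
```

The limitation in card 43 is visible in the code: the lists are fixed, so any word outside them contributes nothing and nothing is learned from labeled data.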

45. What are machine learning methods in NLP?

Approaches that use labeled data to learn patterns.

46. What are common feature representations in ML-based NLP?

Boolean presence, counts, frequencies, and TF-IDF.

47. Why is feature standardization difficult in NLP?

Text features are sparse and high-dimensional; mean-centering them would destroy the sparsity.

48. What is Naïve Bayes?

A probabilistic classifier based on conditional independence assumptions.

49. What is the key assumption of Naïve Bayes?

Features are independent given the class.

50. Why is Naïve Bayes useful in NLP?

It performs well with high-dimensional sparse data.

51. What is the zero-probability problem?

When a word never appears in training data for a class, leading to zero probability.

52. Why is zero probability problematic?

It can eliminate an entire class prediction.

53. What is Laplace smoothing?

Adding a small value to counts to avoid zero probabilities.

54. Why is Laplace smoothing important?

It stabilizes probability estimates and prevents extreme outcomes.
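
Cards 48-54 fit in one small sketch: a multinomial Naïve Bayes classifier with Laplace (add-alpha) smoothing, on a made-up four-document training set. Without the smoothing term, an unseen word would contribute log(0) and wipe out the class (cards 51-52):

```python
import math
from collections import Counter

# Tiny labeled training set; purely illustrative.
train = [("good great fun", "pos"), ("great fun", "pos"),
         ("bad awful boring", "neg"), ("bad boring", "neg")]

docs = [(text.split(), label) for text, label in train]
vocab = {w for tokens, _ in docs for w in tokens}
classes = sorted({label for _, label in docs})
counts = {c: Counter() for c in classes}   # word counts per class
priors = {c: 0 for c in classes}           # document counts per class
for tokens, label in docs:
    priors[label] += 1
    counts[label].update(tokens)

def predict(text, alpha=1.0):
    """Multinomial Naive Bayes with Laplace (add-alpha) smoothing."""
    best, best_score = None, -math.inf
    for c in classes:
        total = sum(counts[c].values())
        score = math.log(priors[c] / len(docs))  # log prior
        for t in text.split():
            # Smoothed likelihood: never zero, even for unseen words.
            score += math.log((counts[c][t] + alpha) /
                              (total + alpha * len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best

# 'movie' never appears in training, yet smoothing keeps the prediction sensible.
print(predict("great movie"))  # 'pos'
```

Working in log space and summing log-probabilities (rather than multiplying raw probabilities) is the standard way to apply the conditional-independence assumption of card 49 without numerical underflow.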

55. What is the role of training data in NLP models?

It is used to learn patterns between text and outcomes.

56. What is the role of validation data?

It helps tune model parameters.

57. What is the role of test data?

It evaluates final model performance.
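
The three roles in cards 55-57 correspond to a simple shuffle-and-split. The 70/15/15 proportions below are a common convention, not a rule, and the data is a placeholder:

```python
import random

# Placeholder labeled documents; purely illustrative.
data = [(f"doc {i}", i % 2) for i in range(100)]
random.seed(0)       # fixed seed so the split is reproducible
random.shuffle(data)

n = len(data)
train = data[: int(0.7 * n)]              # learn patterns (card 55)
val = data[int(0.7 * n): int(0.85 * n)]   # tune parameters (card 56)
test = data[int(0.85 * n):]               # final held-out evaluation (card 57)

print(len(train), len(val), len(test))  # 70 15 15
```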