Unit-IV Recurrent Neural Network Flashcards

Description and Tags

Flashcards to help review key concepts, facts, and details from a lecture on Unit-IV Recurrent Neural Networks.


74 Terms

1
New cards

Recurrent Neural Network (RNN)

A type of neural network that saves the output of a layer and feeds it back to the input to predict the layer's output; excels in tasks where the order of sequence matters due to its memory function.

2
New cards

Sequence Importance

Tasks in which the order of sequence is critical; the arrangement of words defines their meaning, evident in time series data where time defines the occurrence of events.

3
New cards

Recurrent Neural Network

A network with access to prior knowledge about data, designed to understand data where sequence matters.

4
New cards

Traditional Neural Networks vs. RNNs

In traditional neural networks, inputs and outputs are independent of each other, and a deeper network simply means more hidden layers; in RNNs, each output depends on the previous computations in the sequence.

5
New cards

Feed-Forward Network

Information flows only in the forward direction, from input nodes to output nodes, with the help of hidden layer nodes; there are no cycles/loops in the network.

6
New cards

Limitations of Feed-Forward Networks

Cannot be used to handle sequential data, considers only the current state for prediction, cannot memorize previous inputs, and has no memory.

7
New cards

Recurrent Neural Networks (RNNs)

Perform the same task for every element of a sequence, with the output being dependent on previous computations; have a memory that stores information about what has been calculated so far.

8
New cards

Parameter Sharing in RNNs

RNNs reduce the complexity of parameters by sharing them across time steps; when unrolled, the number of steps equals the number of words in the sequence, and the output from the previous step is used as input to the current step.

9
New cards

RNNs for Sequence Learning

Ensures that the output of the next state depends on the previous state, deals with variable lengths of inputs, and the function executed at each time step is the same.

10
New cards

Parameter Sharing in RNN

Used to show the relation between x_t, h_{t-1}, and h_t; the same weights W_hh, W_hx, W_hy and bias b are shared across all time steps.
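
A compact way to write this shared-parameter relation (the tanh nonlinearity is an assumption; the weights and bias are the ones named on the card):

h_t = \tanh(W_{hx} x_t + W_{hh} h_{t-1} + b), \qquad y_t = W_{hy} h_t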

11
New cards

Tasks of RNNs

For each timestamp of the input sequence x, predict output y synchronously, or predict a scalar value of y at the end of the sequence; RNNs can take one or more input vectors and produce output vectors.

12
New cards

Output Calculation in RNNs

Outputs are calculated not only by weights applied on inputs like in a regular NN, but also by a hidden state vector representing the context based on prior inputs/outputs.

13
New cards

Training Through RNNs

First, words are transformed into machine-readable vectors; then, the RNN processes the sequence of vectors one by one; the current state becomes the 'ht-1' for the next time step.
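
A minimal numpy sketch of this loop; the shapes, the tanh nonlinearity, and the output bias are assumptions rather than details from the lecture:

```python
import numpy as np

def rnn_forward(xs, W_hx, W_hh, W_hy, b_h, b_y):
    """Process a sequence of input vectors one at a time.

    xs: list of input vectors (each already a machine-readable vector).
    Returns the output at every time step.
    """
    h = np.zeros(W_hh.shape[0])          # initial hidden state
    outputs = []
    for x in xs:                          # one vector per time step
        h = np.tanh(W_hx @ x + W_hh @ h + b_h)   # current state; becomes h_{t-1} next step
        outputs.append(W_hy @ h + b_y)           # output for this time step
    return outputs
```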

14
New cards

RNN Applications

Sentiment classification, video classification, part-of-speech tagging, image captioning, machine translation.

15
New cards

Back Propagation Through Time (BPTT)

Applies backpropagation training algorithm to RNNs with sequence data like a time series to obtain parameters that optimize the cost function.

16
New cards

Vanishing and Exploding Gradients

The gradient is a product of many terms; if all terms are very small, the gradient will vanish; if all terms are very large, the gradient will explode.
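
A tiny numeric illustration of why a product of many terms vanishes or explodes (the 0.9 and 1.1 factors are arbitrary examples):

```python
small, large = 0.9, 1.1
steps = 100
print(small ** steps)   # ~2.7e-05  -> the gradient effectively vanishes
print(large ** steps)   # ~1.4e+04  -> the gradient explodes
```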

17
New cards

Limitations of RNNs

Due to the vanishing gradient problem, RNNs are limited, and there is no fine control over which part of the context needs to be carried forward or forgotten.

18
New cards

Short-Term Memory

RNNs suffer from short-term memory; if a sequence is long enough, they won't carry information from earlier timestamps to later ones.

19
New cards

Long Short-Term Memory (LSTM) Networks

Special kinds of RNNs capable of learning long-term dependencies.

20
New cards

LSTM Structure

LSTM contains interacting layers in different memory blocks called cells.

21
New cards

Gates in LSTM

Input gate, forget gate, output gate - neural networks that decide which information is allowed on the cell state; they learn what information is relevant to keep or forget.

22
New cards

Forget Gate Layer

Involves deciding what information to throw away from the cell state using a sigmoid layer (forget gate layer).

23
New cards

Input Gate Layer

Decides what new information to store in the cell state, done by input gate with a sigmoid layer and a tanh layer.

24
New cards

Updating Cell State

Multiply C_{t-1} by f_t (forgetting the things we decided to forget), then add the new candidate values, scaled by how much we decided to update each state value.

25
New cards

Calculation of Output

Based on the cell state; a sigmoid layer (output gate layer) decides what parts of the cell state go to the output.
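
Putting cards 22-25 together, a minimal single-step LSTM sketch; the weight shapes and the concatenation layout are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
    """One LSTM time step: forget gate, input gate, cell-state update, output gate."""
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)             # forget gate: what to throw away from C_{t-1}
    i_t = sigmoid(W_i @ z + b_i)             # input gate: what new information to store
    c_tilde = np.tanh(W_c @ z + b_c)         # candidate values
    c_t = f_t * c_prev + i_t * c_tilde       # update the cell state (card 24)
    o_t = sigmoid(W_o @ z + b_o)             # output gate: what parts of the cell state to output
    h_t = o_t * np.tanh(c_t)                 # hidden state / output (card 25)
    return h_t, c_t
```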

26
New cards

Gated Recurrent Unit (GRU)

Similar to LSTM but without the memory cell state; uses a hidden state to transfer information and has only two gates: a reset gate and an update gate.

27
New cards

Update Gate

Acts similarly to the forget and input gates of LSTM; decides what information to throw away and what new information to add.

28
New cards

Reset Gate

Decides how much past information to forget
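
The two gates from cards 26-28 are commonly written as follows (one standard formulation, assumed rather than quoted from the lecture; \sigma is the sigmoid and \odot is element-wise multiplication):

z_t = \sigma(W_z [h_{t-1}, x_t]) \quad \text{(update gate)}
r_t = \sigma(W_r [h_{t-1}, x_t]) \quad \text{(reset gate)}
\tilde{h}_t = \tanh(W [r_t \odot h_{t-1}, x_t])
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t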

29
New cards

Machine Translation

Includes automated translation software that translates text from one natural language to another.

30
New cards

Models Used for Machine Translation

Sequence-to-sequence models such as Encoder-Decoder and Attention Models.

31
New cards

Encoder Network

Built as an RNN (GRU or LSTM) to process an input sequence, and outputs a vector that represents the input sequence.

32
New cards

Decoder Network

Trained to output the translation one word at a time until it outputs the whole output sequence.

33
New cards

Conditional Language Model

Used for a system that translates, e.g., English to Hindi; instead of modeling the probability of any arbitrary sentence, it models the probability of the output sentence conditioned on an input sentence.

34
New cards

Beam Search

An algorithm that selects multiple alternatives for an input sequence at each timestamp based on conditional probability; the number of alternatives depends on a parameter called beam width B.
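
A minimal sketch of the idea; the log_prob_next function, start/end tokens, and maximum length are hypothetical placeholders:

```python
def beam_search(log_prob_next, start_token, end_token, beam_width, max_len):
    """Keep the beam_width best partial translations at each time step."""
    beams = [([start_token], 0.0)]                 # (sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == end_token:               # finished hypotheses are carried forward
                candidates.append((seq, score))
                continue
            for word, logp in log_prob_next(seq):  # conditional probabilities of the next word
                candidates.append((seq + [word], score + logp))
        # keep only the B highest-scoring alternatives
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]
```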

35
New cards

BLEU Score

A score for comparing a candidate translation of text to one or more reference translations; a metric ranging from 0 to 1, where a perfect match gives a score of 1.0.

36
New cards

Benefits of BLEU Score

Quick, inexpensive to calculate, easy to understand, language-independent, and correlates highly with human evaluation.

37
New cards

BLEU Score - Text String Matches

Based on 'text string matches'.

38
New cards

Overcoming High Precision

Clipped counts and modified n-gram precision are used to overcome this problem: each candidate n-gram's count is clipped to its maximum count in the references, so repeating a matching word cannot inflate precision.
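
A sketch of clipped unigram counting using Python's Counter; the toy sentences are made up for illustration:

```python
from collections import Counter

def modified_unigram_precision(candidate, reference):
    """Clip each candidate word's count by its count in the reference."""
    cand_counts = Counter(candidate)
    ref_counts = Counter(reference)
    clipped = sum(min(count, ref_counts[word]) for word, count in cand_counts.items())
    return clipped / sum(cand_counts.values())

# Candidate "the the the the" vs. reference "the cat is on the mat":
# plain precision would be 4/4 = 1.0, clipped precision is 2/4 = 0.5.
print(modified_unigram_precision("the the the the".split(),
                                 "the cat is on the mat".split()))
```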

39
New cards

Beam Search Summary

Considers multiple best options based on beam width using conditional probability; higher beam width gives better translation but uses more memory and computational power.

40
New cards

Attention

Implies directing focus at something or concentrating on one or a few things while ignoring others.

41
New cards

Encoder - Decoder RNN/LSTM

Processes the entire input sentence and encodes it into a context vector, which is the last hidden state of LSTM/RNN.

42
New cards

Attention Mechanism

Used with the Encoder-Decoder RNN/LSTM to solve the problem of compressing the entire input sentence into a single fixed-length context vector.

43
New cards

Context Vector

The encoder's final hidden state, which is expected to contain a good summary of the whole input sequence and is used to initialize the decoder's hidden state.

44
New cards

Good RNNS

Implies those that are good at understanding long sentences.

45
New cards

Attention is Proposed as a solution

Proposed as a solution to the limitation of encoding the input sequence into a single fixed-length vector from which every output time step must be decoded.

46
New cards

Encoder-Decoder Model with Attention Mechanism

With the attention mechanism, the decoder computes a fresh context vector at every output time step by attending over all of the encoder's hidden states, instead of relying only on the final hidden state.
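
One standard way to write the per-step attention (an assumed formulation: h_i are encoder hidden states, s_{t-1} is the previous decoder state, and score is a learned alignment function):

\alpha_{t,i} = \frac{\exp(\mathrm{score}(s_{t-1}, h_i))}{\sum_j \exp(\mathrm{score}(s_{t-1}, h_j))}, \qquad c_t = \sum_i \alpha_{t,i} \, h_i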

47
New cards

Reinforcement Learning

Computer/software agent learns to perform a task through trial and error interactions with a dynamic environment.

48
New cards

Agent-Environment Interaction

The agent observes one of many states of the environment and chooses one of many actions to switch from one state to another.

49
New cards

Goal of Reinforcement learning

To choose smart actions that maximize the cumulative reward.

50
New cards

Reinforcement Learning Used in

Video games, computer games.

51
New cards

Action

The move that an agent makes in a given state in the environment.

52
New cards

Policy

The strategy the agent employs to perform its actions based on the current state.

53
New cards

MDP (Markov Decision Process)

A mathematical framework that can solve most Reinforcement learning problems with discrete actions.

54
New cards

MDP Agent Chooses Action

When the process is in some state s_t, the agent may choose any action a_t ∈ A available in that state.

55
New cards

Markov Process / Discrete Weather Forecast

Also known as a discrete-time Markov chain.

56
New cards

Rewards

The reward function rewards an agent for taking the right actions and punishes it (with negative rewards) for wrong actions.

57
New cards

Discount Factor

Helps to evaluate the expected reward by weighing the advantages/disadvantages of each state; it determines how much future rewards count relative to immediate ones.

58
New cards

Bellman Equation

Helps to solve a Markov decision process; in other words, it helps in finding the optimal policy and value function.

59
New cards

Bellman Equation / Value Functions

Each state s is associated with a value function V(s), equal to the expected return when starting from that state: V(s) = E[R_t | S_t = s].

60
New cards

Bellman Equation Goal

Helps to predict the value of a given state, so the expected long-term return from that state can be found.

61
New cards

Bellman Equation (Another View)

It can also be read as saying that the value of a state equals the immediate reward plus the discounted long-term reward of the subsequent state.
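
In symbols, the optimal-value form of the Bellman equation referred to in cards 58-61 (\gamma is the discount factor from card 57; P and R are the MDP's transition probabilities and rewards):

V(s) = \max_a \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V(s') \Big]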

62
New cards

Value Iteration

Repeatedly update the value of each state until the values become stable.
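
A minimal sketch of value iteration over a toy tabular MDP; the P/R dictionary interface and the hyperparameters are assumptions:

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Repeatedly apply the Bellman optimality backup until the values stabilize.

    P[s][a] is a list of (next_state, probability); R[s][a] is the immediate reward.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            new_v = max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                        for a in actions)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:          # values have become stable
            return V
```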

63
New cards

Policy Iteration

First take a random policy, then evaluate it and improve it.

64
New cards

Value Iteration vs. Policy Iteration: Comparison

The two methods are compared in terms of their approach.

65
New cards

Value Iteration vs. Policy Iteration: Approach

Value iteration updates the value function iteratively; policy iteration alternates between policy evaluation and policy improvement.

66
New cards

Actor and Critic Functions

Both the critic function and the actor (action) function are parameterized with neural networks.

67
New cards

Q-Learning

An off-policy reinforcement learning algorithm that seeks to find the best action to take given the current state.
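
A minimal sketch of the tabular Q-learning update; the dict-of-(state, action) Q-table, the epsilon-greedy behaviour policy, and the hyperparameters are assumptions:

```python
import random

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Off-policy TD update: bootstrap from the best action in the next state,
    regardless of which action the behaviour policy actually takes next."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def epsilon_greedy(Q, s, actions, eps=0.1):
    """Behaviour policy: mostly greedy, occasionally a random (off-policy) action."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])
```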

68
New cards

Why Q-Learning Is Considered Off-Policy

The Q-function learns from actions that are outside the current policy, such as random exploratory actions.

69
New cards

Main Goal of the Q-Table / Q-Function

To obtain the best (maximum) Q-values, i.e., to maximize the Q-function.

70
New cards

Q-Learning Table (Q-Table)

Also called the action-value function; it is iteratively improved so the agent can select better actions.

71
New cards

Approximating Action Values (TD)

The basic idea behind the update is temporal-difference (TD) learning.

72
New cards

Values (and Action Values)

Learning values and action values helps to improve the learning agent.

73
New cards

SARSA (State-Action-Reward-State-Action) Learning

A modified form of Q-learning whose update uses Q(s_t, a_t), the value of the state-action pair actually taken.

74
New cards

SARSA Is On-Policy

Updates are based on the current state, current action, reward, new state, and new action.
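
In symbols, the on-policy SARSA update, for contrast with the Q-learning sketch above (\alpha is the learning rate, \gamma the discount factor):

Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \big[ r_{t+1} + \gamma\, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \big]

Here a_{t+1} is the action actually chosen by the current policy in s_{t+1}, rather than a max over actions.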