IST314 Midterm


Description and Tags

2/26 Midterm Study Guide

Last updated 6:44 PM on 2/20/26


123 Terms

1
New cards

Members of the Dartmouth summer research project on artificial intelligence in 1956

Nathaniel Rochester, Ray Solomonoff, Marvin Minsky, John McCarthy, Claude Shannon

2
New cards

algorithm

a set of instructions or actions followed in sequence to solve a problem or perform a task

3
New cards

perceptron

  • combines weighted inputs and applies an activation function to produce an output

  • adds up its weighted inputs; if the resulting sum is equal to or greater than the perceptron’s threshold, the perceptron fires (=1). otherwise, the perceptron doesn’t fire (=0).

  • the foundation for neural networks based on the human brain

  • mimics the neuron
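
The summing-and-threshold behavior above can be sketched in a few lines of Python; the AND-gate weights and threshold below are illustrative values, not from the course:

```python
def perceptron(inputs, weights, threshold):
    """Sum the weighted inputs; fire (1) if the sum meets the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# illustrative: with these weights and threshold the perceptron acts as an AND gate
print(perceptron([1, 1], [1, 1], threshold=2))  # 1 (fires)
print(perceptron([1, 0], [1, 1], threshold=2))  # 0 (doesn't fire)
```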

4
New cards

hallucination

when AI “perceives patterns or objects that do not exist or are imperceptible to humans, which create outputs that are inaccurate or nonsensical” but presents it confidently

5
New cards

who coined the term “artificial intelligence”?

John McCarthy

6
New cards

cybernetics

control and communication in animals and machines with focus on feedback and system dynamics

7
New cards

who led cybernetics

norbert wiener

8
New cards

scholars’ goal in 1956

  • to establish a research program in artificial intelligence

  • to develop a genuine thinking machine that mirrored human thinking, i.e., artificial general intelligence

9
New cards

marvin minsky

founded MIT’s AI lab focused on symbolic approaches to AI

10
New cards

according to minsky, intelligence is

a suitcase term with many different meanings

11
New cards

minsky’s focus was on

human symbolic manipulation and ways to mimic that via computers

12
New cards

minsky’s central features of intelligence

  • search

  • pattern recognition

  • learning

  • planning

  • inductive reasoning

13
New cards

who invented the perceptron

psychologist Frank Rosenblatt, in the late 1950s

14
New cards

symbolic AI

  • uses words/phrases, i.e., symbols along with rules, which are combined and processed by the program to perform an assigned task

  • rule-based

  • expert systems

15
New cards

who invented symbolic AI

allan newell and herbert simon

16
New cards

first symbolic AI systems

  • general problem solver

    • program coded rules for solving logic problems based on human reasoning

    • foundation of “expert systems” rule-based programming

  • NSS chess program

    • used algorithms that searched for good moves and “heuristics” of known chess strategies

    • advanced early “decision tree” algorithms where improbable options are “pruned”

17
New cards

what is a hidden layer in a multilayer neural network

perceptrons that are neither input nor output units

18
New cards

a connectionist network is

the weighted connections between units

19
New cards

back propagation

when a neural network propagates output errors backward through the network to determine how much each weight contributed to the errors

20
New cards

explainable AI helps AI to be more

fair, accountable, and transparent (FAT)

21
New cards

example of sub-symbolic system

multilayer neural networks

22
New cards

sub-symbolic AI systems

  • neural networks

  • reinforcement learning

23
New cards

AI winter

mid-1970s to the 1990s, because research in symbolic systems did not achieve the promise of general AI

24
New cards

who invented connectionism

UCSD David Rumelhart and James McClelland

25
New cards

connectionist network

the key to generative AI lies in the weighted connections between units

26
New cards

the key to general AI

a computational architecture that’s like the brain and can learn from data

27
New cards

multilayer networks = deep neural networks

networks with more than 1 hidden layer

28
New cards

threshold = bias

the measure of how easy it is to get a unit to output a 1. the lower the bias, the harder to fire

29
New cards

classification

a neural network’s prediction

30
New cards

forward propagation

each unit passes its weighted sum from the input or a prior hidden unit on to the next hidden unit or the output
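
A minimal sketch of forward propagation through one hidden layer, assuming a sigmoid activation; the weights are made-up (untrained) values:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """Propagate inputs through one hidden layer to a single output unit."""
    # each hidden unit sums its weighted inputs, then applies the activation
    hidden = [sigmoid(sum(x * w for x, w in zip(inputs, ws)))
              for ws in hidden_weights]
    # the output unit does the same with the hidden activations
    return sigmoid(sum(h * w for h, w in zip(hidden, output_weights)))

# illustrative weights, not trained values
out = forward([1.0, 0.0], [[0.5, -0.5], [0.3, 0.8]], [1.0, -1.0])
print(out)  # a value between 0 and 1
```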

31
New cards

“learning”

modifying the weights to reduce error and increase prediction accuracy
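
A sketch of one such weight update, in the spirit of the perceptron learning rule; the learning rate, inputs, prediction, and target below are illustrative assumptions:

```python
def update_weights(weights, inputs, prediction, target, learning_rate=0.1):
    """Nudge each weight in the direction that reduces the prediction error."""
    error = target - prediction
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

# prediction was too low (0.3 vs. target 1.0), so the active inputs' weights rise
new_weights = update_weights([0.2, 0.4], [1.0, 1.0], prediction=0.3, target=1.0)
print(new_weights)  # both weights move up, reducing future error
```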

32
New cards

black box problem

when machines “learn”, we don’t usually know why they provide the output they do

33
New cards

explainable AI

in the 1980s there was a push for scientifically grounded principles to make AI systems “self-explaining”

34
New cards

core ethical principles of AI

  • fairness

  • accountability

  • transparency

  • interpretability

  • trustworthiness

35
New cards

Turing Test

a challenge that measures whether a computer can chat with a human judge such that the human judge thinks they are talking with another human

36
New cards

strong AI = general AI = artificial general intelligence = superintelligence

an AI system that truly understands and has actual human-like intelligence and can do a variety of tasks

37
New cards

exponential growth as it relates to computers

computers get faster and faster every year with advances in computer chips and processing

38
New cards

the singularity

  • by 2045, computers will surpass human intelligence

  • humans and AI will merge to form an immortal superintelligence

39
New cards

prompt engineering

the practice of designing inputs to AI to get the desired output

40
New cards

Alan Turing

codebreaker in WWII who wrote a 1950 paper on machine intelligence proposing that a machine could be considered intelligent if it could convince human judges that they were talking with another human

41
New cards

weak AI = narrow AI

can only do a specific task (e.g., draw images or drive a car)

42
New cards

John Searle

philosopher who introduced the concepts of strong and weak AI

  • weak AI: AI that simulates a human mind but does not have one

  • strong AI: AI that actually has a mind

43
New cards

data centers

  • where compute happens

  • need electricity to run computers to run calculations to train models and then calculate responses to prompts

44
New cards

prompt engineering

  • careful designing of instructions to generative AI to get the output desired

  • metacognitive process: need to think about what you want as output; relevant factors (audience, tone); complexity; etc

45
New cards

semantic space

a word’s meaning is understood by its occurrence with other words

46
New cards

natural language processing

getting computers to process and understand human language

47
New cards

for computers to process human language, they first have to

convert words to numbers

48
New cards

1 key advance of BERT

has an “attention mechanism” that allows the model to more accurately represent the meaning of a word in a sentence

49
New cards

word2vec

  • a neural network model that represents words as vectors

  • word embedding, i.e., helps determine meaning of words via context

  • captures word meanings, similarity to other words, relationships with surrounding text
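
To illustrate the “nearby in vector space” idea, here is a cosine-similarity sketch with made-up 3-dimensional vectors (real word2vec embeddings have hundreds of dimensions and learned values):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two word vectors: 1.0 means they point the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# toy vectors chosen so that "cat" and "dog" point in similar directions
cat = [0.9, 0.1, 0.3]
dog = [0.8, 0.2, 0.4]
car = [0.1, 0.9, 0.2]
print(cosine_similarity(cat, dog) > cosine_similarity(cat, car))  # True
```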

50
New cards

information retrieval = question-answering systems are thanks to

the presence of large amounts of writing on the internet

51
New cards

current AI revolution of chatbots and LLMs is thanks to

  • neural network computing, especially recurrent neural networks

    • Paul Werbos’s backpropagation through time (1980s); Long Short-Term Memory, Hochreiter & Schmidhuber (1997)

  • the presence of large amounts of writing on the internet

  • word2vec

52
New cards

recurrent neural networks (RNN)

multilayer neural networks with recurrent connections that let them “remember” earlier inputs. language is sequential, so there needs to be memory

53
New cards

feedforward neural networks (FFNN)

don’t “remember” because they have no recurrent connections; information flows in one direction only

54
New cards

who created Word2Vec (word to vector)

a Google research team led by Tomas Mikolov in 2013

55
New cards

tokenization

turning words in a corpus into numbers; part of word embedding
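
A toy sketch of this words-to-numbers step; note that real tokenizers usually split text into subword pieces rather than whole words:

```python
def tokenize(corpus):
    """Assign each distinct word in the corpus an integer id."""
    vocab = {}
    for word in corpus.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

vocab = tokenize("the cat sat on the mat")
print(vocab)  # {'the': 0, 'cat': 1, 'sat': 2, 'on': 3, 'mat': 4}
```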

56
New cards

vectorizing

multi-dimensional numeric representation of words; synonyms and associated words mapped nearby in vector space

57
New cards

BERT

  • bidirectional encoder representations from transformers

  • trained on Google Books and Wikipedia

  • implemented in Google Search in 2019

58
New cards

who invented BERT?

the Google “Brain” team led by Jacob Devlin in 2018

59
New cards

breakthrough of BERT

  • RNNs process text in only one direction, while transformers can use both left and right contexts across all units (i.e., the hidden layers)

  • transformers contextualize a given token within a “context window” where an “attention mechanism” amplifies important tokens and diminishes less important tokens = encoders

  • encoders track relationships between words in sentences and the sequences of words

  • also masks words to test-predict accuracy of relationships between words
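
The amplify/diminish behavior of an attention mechanism comes from a softmax over relevance scores; the scores below are hypothetical, not from BERT:

```python
import math

def attention_weights(scores):
    """Softmax turns raw relevance scores into attention weights that sum to 1;
    high-scoring tokens are amplified, low-scoring tokens diminished."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical relevance scores of three context tokens for an ambiguous word
weights = attention_weights([2.0, 0.1, -1.0])
print(weights)  # the first token gets most of the attention
```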

60
New cards

tensor processing units (TPU)

special computer chips designed to accelerate complex calculations for machine learning and NLP

61
New cards

OpenAI advancement is GPT

  • built on transformers architecture

  • generative pre-trained transformer

  • GPT is primarily a decoder focused on next word prediction

  • “generative” — trained to create new content

  • true LLM

62
New cards

mental model

the model that helps humans understand how the world works

63
New cards

shots

references or examples in prompt engineering

64
New cards

chain of thought prompting

  • you have a complex task in mind and you coach the AI through the steps of that task

  • guides an AI model to solve complex problems by breaking them down into a series of logical, intermediate steps before arriving at a final answer

65
New cards

what computational approach did watson use to win jeopardy

question answering

66
New cards

who developed Watson

IBM

67
New cards

adversarial attacks

  • the ability to trick a computer into outputting the wrong answer

  • intentionally tricking a computer system to output an incorrect answer without human detection

68
New cards

answer extraction = information retrieval = question-answering system = search

questions get turned into a search query, and answers are extracted from a large database of information

69
New cards

information retrieval

the computer science term for the engineering behind QA systems

70
New cards

1980s-2000s difference between IBM and Google competition

  • competed for technical dominance in information retrieval and AI

  • IBM focused on QA challenges

  • Google focused on commercial search

71
New cards

when was IBM founded

1911 as a tabulating company (data processing)

72
New cards

IBM’s AI focus

  • created Deep Blue to compete at chess; beat world chess champion Garry Kasparov in 1997 using a rule-based approach

  • built Watson (named after IBM company founder) and competed on Jeopardy! in 2011 using an information retrieval approach

73
New cards

when was Google founded

1998, by Larry Page and Sergey Brin from Stanford, to compete with Yahoo!

74
New cards

Google’s goal vs Yahoo

  • to create a better WWW search experience

  • Yahoo! indexed pages by topics

  • Google developed an algorithm to rank importance of Web pages based on in-links
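
That in-link ranking idea (PageRank) can be sketched as an iterative score update; the three-page link graph and the damping factor of 0.85 below are illustrative:

```python
def pagerank(links, iterations=50, damping=0.85):
    """Iteratively score pages: a page is important if important pages link to it.
    `links` maps each page to the pages it links out to."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# hypothetical three-page web: both A and B link to C, so C ranks highest
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
print(max(ranks, key=ranks.get))  # C
```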

75
New cards

IBM vs Google

  • IBM hasn’t innovated further beyond Watson’s QA

  • Google pushed AI innovation

    • Word2Vec for NLP

    • tensor processing units and tensor flow software for efficient computations

    • transformers as encoder-decoder architectures for NLP

    • Bard as Google’s first public LLM chatbot

76
New cards

layers in a convolutional neural network are comprised of

activation maps

77
New cards

how does the brain visually process objects?

  • feed forward flow of information

  • inputs from the eye are decomposed into small units

  • the brain processes those units into key elements and then analyzes them hierarchically

78
New cards

when the brain processes objects in its visual field, it processes the information

in a hierarchical series that starts with edges and ends in recognizing the object

79
New cards

convolution in CNNs are

  • the mathematical calculation used by computers to define features of an object

  • a calculation that multiplies the values in a receptive field by their corresponding weights and sums the results
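
That receptive-field calculation can be sketched directly; the 2×2 patch of pixel values and the filter weights below are illustrative:

```python
def convolve(receptive_field, weights):
    """Multiply each value in the receptive field by its weight and sum."""
    return sum(v * w for row_v, row_w in zip(receptive_field, weights)
               for v, w in zip(row_v, row_w))

# a 2x2 patch of pixel values and a 2x2 filter (illustrative numbers)
patch = [[1, 0],
         [0, 1]]
kernel = [[0.5, -0.5],
          [-0.5, 0.5]]
print(convolve(patch, kernel))  # 1.0
```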

80
New cards

neocognitron

one of the earliest neural networks trained for computer vision

81
New cards

neural networks mimic…

the brain

82
New cards

computer neural networks do what for visual processing

  • break images into small pixels of light and convert them to numbers for computational processing

  • computers create hierarchies for processing visual information for object recognition

83
New cards

what are CNNs trained to do?

to focus on specific features of an image in order to learn how to recognize objects

84
New cards

compute = computation = processing

mathematical calculations computers must run to arrive at their outputs, usually expressed as probabilities

85
New cards

who created LeNet

Yann LeCun

86
New cards

what does LeNet do

recognize handwriting of numbers

87
New cards

why was ImageNet a breakthrough in image recognition?

it provided a massive dataset of labeled images for training and testing AI

88
New cards

why was Amazon Mechanical Turk important for image recognition?

human workers on that platform were able to categorize millions of images and improve the training data

89
New cards

what is WordNet’s relationship to ImageNet

provides the nouns that were used as the categories to label images

90
New cards

where did Fei Fei Li get the images for ImageNet from

Flickr and Google Image Search

91
New cards

who invented convolutional neural networks?

Yann LeCun, who in 1989 demonstrated the ability of computers to decode handwriting

92
New cards

who developed ImageNet?

Fei Fei Li in 2009 with a massive dataset of labeled images

93
New cards

who developed AlexNet

Geoffrey Hinton, Alex Krizhevsky, and Ilya Sutskever from UToronto

94
New cards

when did AlexNet beat IBM and others in ImageNet image recognition competition

2012

95
New cards

when did Hinton, Sutskever, and Krizhevsky leave UToronto for Google

2013

96
New cards

Fei Fei Li

  • University of Illinois to Princeton computer vision researcher

  • 2007-2010, downloaded a billion images from publicly available photo platforms like Flickr

  • categorized the images into 22,000 categories with Amazon Mechanical Turk

97
New cards

where do ImageNet categories come from

WordNet

98
New cards

who developed WordNet

Miller and Fellbaum at Princeton in 1985 organized 155,000 words into synsets (word-sense pairs)

99
New cards

synsets

categories of words and their attributes (associations)

100
New cards

Clever Hans

  • a story about a horse who seemingly could do math

  • represents learned shortcuts (spurious correlations) in AI: an irrelevant pattern in the data that happens to correlate with the right answer will be learned by the AI, resulting in incorrect predictions