Name each of the six dimensions of algorithms correctly:
1. Patterns of inclusion
2. Cycles of anticipation
3. The evaluation of relevance
4. The promise of algorithmic objectivity
5. Entanglement with practice
6. The production of calculated publics
Why must algorithms be interrogated?
Gillespie concludes that algorithms are not neutral; they reflect human values and shape public knowledge, making transparency and accountability essential as algorithms increasingly influence society.
1. Patterns of inclusion
the choices behind what makes it into an index in the first place, what is excluded, and how data is made algorithm ready. (Google Search results)
2. Cycles of anticipation
the implications of algorithm providers' attempts to thoroughly know and predict their users, and how the conclusions they draw can matter. (Spotify's Discover Weekly playlists)
3. The evaluation of relevance
the criteria by which algorithms determine what is relevant, how those criteria are obscured from us, and how they enact political choices about appropriate and legitimate knowledge. (Facebook News Feed)
4. The promise of algorithmic objectivity
the way the technical character of the algorithm is positioned as an assurance of impartiality, and how that claim is maintained in the face of controversy. (YouTube’s recommendation algorithm)
5. Entanglement with practice
how users reshape their practices to suit the algorithms they depend on, and how they can turn algorithms into terrains for political contest, sometimes even to interrogate the politics of the algorithm itself. (TikTok creators optimizing content)
6. The production of calculated publics
how the algorithmic presentation of publics back to themselves shapes a public's sense of itself, and who is best positioned to benefit from that. (Twitter trending topics)
What is algorithmic ranking, according to Dean Eckles?
It is the use of algorithms to automatically order or prioritize information (like posts, news) presented to users on digital platforms. Rather than presenting information chronologically or randomly, platforms use algorithms to predict which content will be most engaging or relevant to a user, and then rank the content accordingly.
Why is algorithmic ranking unavoidable on social media platforms?
because the amount of content vastly exceeds what any user can realistically consume. Given the scale of user-generated content, without ranking, platforms would either have to show content randomly or overload users with too much information. Therefore, some form of ranking by engagement, relevance, freshness, or quality is necessary to manage attention and provide a usable, satisfying experience.
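As an illustrative sketch only (not the diagram from the Eckles reading), the simplest form of ranking amounts to "score each post by predicted engagement, then sort." The Post fields, the freshness_weight parameter, and the scoring rule below are invented for the example:

```python
# Illustrative sketch: the simplest kind of feed ranking is to score each post,
# then sort the feed by that score in descending order.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # e.g., model-estimated probability of a like or click
    age_hours: float             # how long ago the post was created

def rank_feed(posts, freshness_weight=0.1):
    """Order posts by predicted engagement, lightly boosting fresher posts."""
    def score(p):
        return p.predicted_engagement - freshness_weight * p.age_hours
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("a", predicted_engagement=0.30, age_hours=2.0),
    Post("b", predicted_engagement=0.55, age_hours=20.0),
    Post("c", predicted_engagement=0.40, age_hours=1.0),
]
print([p.post_id for p in rank_feed(feed)])  # ['c', 'a', 'b']
```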
What does a diagram of the simplest ranking algorithm look like? (It's in the Eckles reading.)
What does the author say about the effects of algorithmic ranking?
algorithmic ranking shapes public discourse and affects individual well-being by influencing what information people see and engage with.
How can you connect algorithms/algorithmic ranking in our everyday media use to the criteria of disruptive change?
Algorithmic ranking connects to disruptive change because it reshapes how people access media, news, and social interactions. Instead of relying on traditional editors, broadcasters, or journalists to filter and prioritize information, algorithms now take over that role.
This disrupts existing power structures in media (journalism, advertising, publishing) and creates new winners and losers based on platform algorithms instead of traditional gatekeepers.
How can you connect algorithms to disruptive innovation theory?
Serve needs differently: Algorithms prioritize engagement and personalization, not journalistic integrity or editorial standards.
Start simple and gain dominance: Early social media feeds were simple chronological lists; now sophisticated algorithms dominate media consumption.
Displace traditional systems: Editors, broadcasters, and publishers have lost their monopoly on information flow because of algorithmic ranking.
Algorithmic culture
is the use of computational processes to sort, classify, and hierarchize people, places, objects, and ideas, as well as the habits of thought, conduct, and expression that arise in relationship to those processes.
Surveillance
is watching with purpose. It involves the "focused, systematic and routine attention to personal details for purposes of influence, management, protection or direction."
1950
Alan Turing publishes "Computing Machinery and Intelligence," introducing the Turing Test.
1956
The Dartmouth Conference established the field of AI (the first proposed AI conference). It was organized by John McCarthy, a mathematics professor at Dartmouth College, together with Marvin Minsky, Claude Shannon, and Nathaniel Rochester. In his proposal, McCarthy stated that the conference was "to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
1959
Arthur Samuel coins the term "machine learning."
AI was one among many name choices for the Dartmouth Conference. What were the other original names?
1. Automata Studies
2. Computational Intelligence
3. Machine Intelligence
Turing Test
Alan Turing's 1950 article "Computing Machinery and Intelligence" discussed the conditions for considering a machine to be intelligent. He argued that if the machine could successfully pretend to be human to a knowledgeable observer, then you should consider it intelligent. The observer could interact with the machine and a human by teletype (to avoid requiring that the machine imitate the appearance or voice of the person); the human would try to persuade the observer that they were human, while the machine would try to fool the observer.
Intelligence
might be defined as the ability to learn and perform suitable techniques to solve problems and achieve goals, appropriate to the context in an uncertain, ever-varying world. A fully pre-programmed factory robot is flexible, accurate, and consistent but not intelligent.
Artificial Intelligence (AI),
a term coined by emeritus Stanford Professor John McCarthy in 1955, was defined by him as “the science and engineering of making intelligent machines”. Much research has humans program machines to behave in a clever way, like playing chess, but, today, we emphasize machines that can learn, at least somewhat like human beings do.
Autonomous systems
can independently plan and decide sequences of steps to achieve a specified goal without micro-management. A hospital delivery robot must autonomously navigate busy corridors to succeed in its task. In AI, autonomy doesn’t have the sense of being self-governing common in politics or biology.
Machine Learning (ML)
is the part of AI studying how computer agents can improve their perception, knowledge, thinking, or actions based on experience or data. For this, ML draws from computer science, statistics, psychology, neuroscience, economics and control theory.
supervised learning
a computer learns to predict human-given labels, such as dog breed based on labeled dog pictures;
unsupervised learning
does not require labels, sometimes making its own prediction tasks such as trying to predict each successive word in a sentence;
reinforcement learning
lets an agent learn action sequences that optimize its total rewards, such as winning games, without explicit examples of good techniques, enabling autonomy.
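A minimal sketch of the supervised-learning idea above, assuming scikit-learn is available; the toy features, labels, and the choice of a decision tree are invented for illustration, not taken from the reading:

```python
# Supervised learning in miniature: fit a model on human-labeled examples,
# then predict the label of a new, unseen example.
from sklearn.tree import DecisionTreeClassifier

# Each example: [weight_kg, ear_length_cm]; labels are human-provided breeds.
X = [[30, 10], [32, 11], [8, 6], [7, 5]]
y = ["labrador", "labrador", "corgi", "corgi"]

model = DecisionTreeClassifier()
model.fit(X, y)                   # learn the mapping from features to labels
print(model.predict([[31, 10]]))  # ['labrador'] for a new, unlabeled example
```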
Deep Learning
is the use of large multi-layer (artificial) neural networks that compute with continuous (real number) representations, a little like the hierarchically organized neurons in human brains. It is currently the most successful ML approach, usable for all types of ML, with better generalization from small data and better scaling to big data and compute budgets.
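A minimal sketch, assuming PyTorch, of what "multi-layer" means here: a few stacked layers of artificial neurons transforming continuous-valued inputs into continuous-valued outputs. The layer sizes and count are arbitrary toy choices, not a claim about any particular system:

```python
# A tiny multi-layer (deep) neural network: continuous inputs flow through
# stacked layers of weighted sums and nonlinearities to continuous outputs.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(4, 16),   # layer 1: 4 input features -> 16 hidden units
    nn.ReLU(),          # nonlinearity between layers
    nn.Linear(16, 16),  # layer 2
    nn.ReLU(),
    nn.Linear(16, 3),   # output layer: one real-valued score per class
)

x = torch.randn(1, 4)   # one example with 4 continuous-valued features
print(model(x))         # real-number outputs, shape (1, 3)
```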
Algorithm
lists the precise steps to take, such as a person writes in a computer program. AI systems contain algorithms, but often just for a few parts like a learning or reward calculation method. Much of their behavior emerges via learning from data or experience, a sea change in system design that Stanford alumnus Andrej Karpathy dubbed Software 2.0.
Narrow AI
is intelligent systems for one particular thing, e.g., speech or facial recognition.
Human-level AI, or Artificial General Intelligence (AGI)
seeks broadly intelligent, context-aware machines. It is needed for effective social chatbots or human-robot interaction.
Human-Centered Artificial Intelligence
AI that seeks to augment the abilities of, address the societal needs of, and draw inspiration from human beings. It researches and builds effective partners and tools for people, such as a robot helper and companion for the elderly.
What is Natale's perspective on AI?
Natale argues that “deception” is at the core of AI systems, and humans are often deceived as a result of their experiences with these systems.
How can you connect recent breakthroughs in AI-powered products with the 5 criteria?
1. Audience
Google: Dominates a massive, broad audience across all age groups globally.
Perplexity: Appeals to early adopters and people looking for AI-enhanced search (faster, more direct answers).
AI search tools like Perplexity are drawing tech-savvy users who want conversational, synthesized results rather than endless lists of links.
2. Culture
Google shaped how people think about information retrieval ("Google it").
Perplexity challenges that culture: users now expect summarized answers, source citations, and natural language interaction, not just sifting through links.
There's a cultural shift toward efficiency and trust-building through AI-sourced information.
3. Speed
Perplexity is faster at delivering concise answers using AI summarization, cutting down on search time.
Google is fast but often sends users across multiple sites; Perplexity minimizes this by doing the thinking and connecting for users right away.
AI-powered search accelerates information consumption.
4. Scale
Google already operates at massive global scale.
Perplexity, while newer, has scalable AI infrastructure (built on LLMs like OpenAI’s models) and could quickly expand because AI models can serve millions simultaneously once scaled.
5. Impact
Perplexity (and similar AI searches) could disrupt traditional search models.
Impacts:
Journalism and SEO industries may need to adapt because fewer users click on original sources.
Advertising models (dominant on Google) may become less effective if AI answers bypass ad clicks.
AI search could restructure the web economy, forcing platforms to rethink visibility and monetization.
What role did Claude Shannon play in the history of machine learning/AI?
Claude Shannon created the language and basic tools that made it possible to think about machines not just as calculators but as devices that could process information, make decisions, and learn, setting the stage for all modern AI and machine learning.
Do you know who Alan Turing was and why he is so important?
In 1950, Alan Turing published "Computing Machinery and Intelligence," introducing the Turing Test: a way to measure whether a machine could exhibit intelligent behavior indistinguishable from a human's, a concept that still shapes AI research today.
Turing speculated that machines could learn from experience, a major early insight into what would later become machine learning.
Do you know what the Imitation Game entails?
A man (A)
Participant A tries to convince the interrogator (C) that he is the woman.
A woman (B)
Participant B also tries to convince the interrogator (C) that she is the woman.
An interrogator (C)
C cannot see A or B.
C asks questions and tries to guess which one is the man and which one is the woman based only on their responses.
Turing originally describes this game involving gender deception as a thought experiment to introduce a bigger question:
"What if a machine took the place of A?"
Instead of a man and a woman, it would be:
A machine vs. a human, both trying to convince C through conversation that they are human.
If the machine can fool C as often as a real person could, then the machine can be said to "think."
This evolved into what we now call the Turing Test — a way to evaluate a machine’s ability to exhibit intelligent, human-like behavior.
In the original Imitation Game, it’s about guessing gender through dialogue.
Turing extended it to machines imitating humans through language — a revolutionary idea in the history of artificial intelligence.
Turing Trap
Erik Brynjolfsson's term for the danger that arises when companies' business models and financial goals lead them toward automation rather than augmentation: the economy risks getting caught in a labor-market funk in which the average worker has few ways to improve outcomes.
Do companies have more incentives to automate their processes or help their workers?
Generally, companies have stronger incentives to automate their processes rather than invest heavily in supporting workers. Automation can reduce labor costs, increase efficiency, minimize human error, and allow businesses to scale faster without depending on unpredictable human labor.
Which direction will they be tempted to go in, and why?
Companies will be tempted toward automation, especially as AI and machine learning technologies become cheaper and more powerful.
Automation promises short-term profits and long-term stability.
What does that do to workers' bargaining power?
Automation weakens workers' bargaining power in many industries.
How could we think about the intersection between AI and the world of work/ skills?
The intersection between AI and work is about skills transformation. Workers will increasingly need skills that complement AI, such as critical thinking, creativity, emotional intelligence, and complex problem-solving.
Jobs will shift toward areas where human judgment, empathy, and adaptability are needed, because AI still struggles with those qualities.
Economically and socially, we need to think about reskilling workers and building systems that support lifelong learning, not just one-time education.