Freedom of Expression in ICT607 - IT Ethics and Professional Practice

Learning Aims

  • Understand freedom of expression concepts and the tension between free speech and harmful content.
  • Compare Australian and US approaches to freedom of expression protection.
  • Explain the role of the eSafety Commissioner and the Online Safety Act 2021 in Australia.
  • Define defamation, slander, and libel, and understand Australian defamation law reforms (2024).
  • Analyse types of expression NOT protected under law (obscenity, defamation, incitement).
  • Examine key freedom of expression issues: Internet censorship, anonymity, hate speech, pornography, and fake news.
  • Evaluate ethical responsibilities of digital platforms in content moderation.
  • Apply critical thinking to contemporary cases involving online speech and censorship.
  • Discuss the balance between protecting free expression and preventing online harms.

Understanding Freedom of Expression

  • Definition: Freedom of expression is the right to communicate opinions and ideas without unjustified interference; most legal systems treat it as fundamental but not absolute.
  • Complex Trade-offs: Decisions about online speech require weighing the value of open expression against the harms speech can cause, such as defamation, harassment, and incitement.
  • Who Decides: Governments, courts, platforms, and employers all make decisions about what may be published, raising questions of legitimacy and accountability.
  • Relevance to IT: Digital platforms spread speech at unprecedented scale and speed, making content decisions by IT professionals and companies ethically consequential.

US First Amendment Rights

  • First Amendment: Ratified in 1791, it protects freedom of speech and expression, creating one of the strongest free speech protections in the world.
  • Key Protections: Freedom of speech and of the press, freedom of religion, the right to assemble peacefully, the right to petition the government, and (as courts have interpreted it) a right to anonymous speech, including online.
  • Relevance to IT Ethics: Because many major tech companies are American, US speech norms shape their global content policies.

Australian Freedom of Expression

  • Lack of Explicit Constitutional Rights: Australia does not have an explicit constitutional right to freedom of expression, unlike the US First Amendment.
  • Implied Freedom of Political Communication: The High Court has recognised an implied freedom that applies only to political communication, so speech outside that sphere has no constitutional protection.
  • Regulatory Balance: Australia balances freedom of expression with laws preventing harm, reflected in stricter regulations regarding defamation and hate speech.

Types of Expression NOT Protected

In the United States (the First Amendment does not protect):
  • Perjury and Fraud: Lies under oath or deception for financial gain.
  • Defamation: False statements harming others' reputations.
  • Obscene Speech: Content failing the Miller Test.
  • Incitement of Panic: E.g., falsely shouting "fire" in a crowded theater.
  • Incitement to Crime: Urging imminent lawless action.
  • Fighting Words: Words provoking immediate breach of peace.
  • Sedition: Speech undermining government authority.
In Australia (Additional Restrictions):
  • Racial Vilification: Covered under the Racial Discrimination Act (Section 18C).
  • Serious Online Abuse: Governed by the Online Safety Act 2021.
  • Refused Classification Content: Material not allowed due to legal definitions.
  • Contempt of Court: Restriction on expressions impacting legal proceedings.

Obscene Speech and the Miller Test

  • Definition of Obscenity: Content a community regards as seriously offensive and outside accepted standards of decency; obscene speech is not constitutionally protected in the US.
  • Miller Test: Criteria from Miller v. California (1973) for whether content is legally obscene; all three prongs must be met (see the sketch after this list):
    • (1) The average person, applying contemporary community standards, would find the work appeals to prurient (sexual) interest.
    • (2) The work depicts sexual conduct in a patently offensive way, as defined by applicable law.
    • (3) The work lacks serious literary, artistic, political, or scientific value.
  • Community Standards: Because community standards vary by place, applying them to borderless online content is difficult.
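
To make the test's conjunctive structure concrete, here is a minimal Python sketch; the boolean inputs stand in for subjective judgments that real courts make, so this is an illustration of the logic, not a legal tool.

```python
# Minimal sketch of the Miller Test's three-prong structure.
# The booleans stand in for subjective judgments made by courts.
def is_obscene(appeals_to_prurient_interest: bool,
               patently_offensive_under_law: bool,
               lacks_serious_value: bool) -> bool:
    # Material is obscene only if ALL three prongs are satisfied.
    return all([appeals_to_prurient_interest,
                patently_offensive_under_law,
                lacks_serious_value])

# A work with serious artistic value fails the third prong, so it is protected.
print(is_obscene(True, True, False))  # False
```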

Defamation: Definition and Types

  • Defamation Defined: False statements harming another's reputation.
  • Key Elements of Defamation:
    • Statement must be published to a third party.
    • Statement must be false (truth is a defense).
    • Statement must cause reputational harm.
    • Statement must refer to an identifiable person or entity.
  • Types of Defamation:
    • Slander: Spoken defamation; historically treated as less serious because it is transient.
    • Libel: Written or published defamation; historically permanent and more serious.
  • Digital Age Treatment: Online defamation treated as libel.
  • Legal Nature: Defamation is a civil matter; typically, plaintiffs seek damages, not criminal prosecution.
  • Chilling Effect: Fear of lawsuits can hinder legitimate criticism and discourse.

Australian Defamation Law Reforms (2024)

  • Update Objective: Reforms reflect the realities of online publishing and social media where harmful content spreads rapidly.
  • Stage 2 Reforms: Started in July 2024, focusing on defamation spread via digital intermediaries.
  • Key Features:
    • “Innocent Dissemination” Defense: Requires platforms to have a clear complaints process.
    • Court Orders for Content Removal: Courts can compel platforms to remove defamatory material.
    • Legal Protection for Whistleblowing: Reinforced legal safety for reports made to authorities.
  • Additional Changes:
    • Safe harbor for platforms when acting quickly on valid complaints (7-day removal timeframe).
    • Courts can direct platforms to remove content not under litigation.
    • Specific protections exist for victim-survivors of crime.
    • Single Publication Rule: Legal claims triggered by initial publication rather than subsequent views/shares.
  • Balancing Act: Reforms aim to protect reputations while allowing for open discussions online.

Case Study: Defamation in the Digital Age

Example: Geoffrey Rush vs Daily Telegraph (2018-2019)
  • Overview of Case: Actor Geoffrey Rush sued News Corp's Daily Telegraph over defamatory articles.
  • Finding: The Federal Court found the articles defamatory and not substantially true, awarding Rush $2.9 million in damages and underscoring media responsibility.
  • Key Issues:
    • Concerns over media due diligence and “trial by media” impact.
    • Balancing press freedom with individuals' reputations.

eSafety Commissioner and Online Safety Act 2021

  • Establishment: Created in 2015 (originally as the Children's eSafety Commissioner), the eSafety Commissioner is Australia's first dedicated online safety regulator.
  • Online Safety Act 2021 Role: Expanded eSafety's powers, including the authority to remove harmful online content.
  • Key Regulatory Schemes:
    1. Cyber-abuse prevention targeting adults.
    2. Cyberbullying measures for children.
    3. Image-based abuse, focusing on non-consensual intimate image sharing.
    4. Removal of illegal content including exploitation material.
  • Operational Mechanism: Involves complaint handling, investigation, and compliance measures with severe penalties for non-compliance.

eSafety Powers and Responsibilities

  • Content Removal Authority: eSafety can issue removal notices requiring prompt compliance by platforms.
  • Preventative Roles: eSafety conducts educational outreach and research on online safety issues.
  • Protection Roles: Investigation and enforcement of compliance, while developing industry expectations for online safety.
  • Criticism: Concerns exist about eSafety’s perceived overreach conflicting with free speech rights.

Controlling Access to Information on the Internet

US Legislative Attempts to Protect Children Online:
  • Communications Decency Act (CDA) 1996: Most provisions were ruled unconstitutional, though Section 230 survives, shielding online intermediaries from liability for user-posted content.
  • Child Online Protection Act (COPA) 1998: Struck down by the courts as overly broad.
  • Children's Internet Protection Act (CIPA) 2000: Requires schools and libraries receiving certain federal funding to filter inappropriate material.
  • Key Challenge: Balancing child protection with adults' rights to access information.

Internet Filters and Content Blocking

  • Filtering Types (see the sketch after this list):
    • URL Filtering: Blocks requests to known harmful websites.
    • Keyword Filtering: Blocks content containing specific banned terms.
    • Dynamic Content Filtering: Analyses page content in real time and blocks it on the fly.
  • Applications: Used in schools, workplaces, and at national levels.
  • Limitations of Filtering: Includes risk of overblocking and user circumvention methods.
  • Ethical Dilemmas: Decisions on what constitutes inappropriate content are subjective.
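
As a rough illustration of how URL and keyword filtering differ, here is a minimal Python sketch; the blocklists and page text are hypothetical, and real filters rely on vendor-maintained databases and machine-learning classifiers rather than hard-coded sets.

```python
# Minimal sketch of URL and keyword filtering with hypothetical blocklists.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-bad-site.com"}   # hypothetical URL blocklist
BLOCKED_KEYWORDS = {"forbidden-term"}        # hypothetical keyword list

def url_filter(url: str) -> bool:
    """Return True if the URL's host is on the blocklist."""
    host = urlparse(url).hostname or ""
    return host in BLOCKED_DOMAINS

def keyword_filter(page_text: str) -> bool:
    """Return True if the page contains any blocked keyword.
    Naive matching like this is what causes overblocking: a banned
    term can appear in legitimate medical or educational content."""
    text = page_text.lower()
    return any(kw in text for kw in BLOCKED_KEYWORDS)

def should_block(url: str, page_text: str) -> bool:
    # Dynamic content filtering would run checks like these on live traffic.
    return url_filter(url) or keyword_filter(page_text)

print(should_block("https://example-bad-site.com/page", "harmless text"))  # True
```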

Digital Millennium Copyright Act (DMCA)

  • Overview: Enacted in 1998, the DMCA updated US copyright law for the digital era.
  • Safe Harbor Provisions: Protect online service providers from liability for users' infringement if they meet certain conditions.
  • Takedown Process: Defines a notice-and-takedown workflow for copyright holders to have infringing content removed (sketched below).
  • Criticism of DMCA: The process can be misused for censorship, and balancing copyright enforcement with free expression remains contested.
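
A simplified Python sketch of the notice-and-takedown sequence follows; the states and the restore window are simplifications for illustration, and a real implementation would track the statutory details rather than this toy state machine.

```python
# Simplified state machine for the DMCA notice-and-takedown sequence.
from enum import Enum, auto

class Status(Enum):
    LIVE = auto()             # content is publicly available
    REMOVED = auto()          # taken down after a takedown notice
    COUNTER_NOTICED = auto()  # uploader has disputed the claim
    RESTORED = auto()         # restored because no lawsuit followed

def receive_takedown_notice(status: Status) -> Status:
    # The platform removes the material expeditiously to keep safe harbor.
    return Status.REMOVED if status is Status.LIVE else status

def receive_counter_notice(status: Status) -> Status:
    return Status.COUNTER_NOTICED if status is Status.REMOVED else status

def resolve(status: Status, lawsuit_filed: bool) -> Status:
    # If the copyright holder does not sue within the statutory window
    # (roughly 10-14 business days), the content may be restored.
    if status is Status.COUNTER_NOTICED and not lawsuit_filed:
        return Status.RESTORED
    return status

s = receive_takedown_notice(Status.LIVE)
s = receive_counter_notice(s)
print(resolve(s, lawsuit_filed=False))  # Status.RESTORED
```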

Internet Censorship

  • Definition of Internet Censorship: Control over online publishing and access.
  • Methods of Censorship: May include website blocking, keyword restrictions, and content surveillance.
  • Censorship Actors: Include governments, ISPs, platforms, and employers.
  • Global Context: Internet freedom varies widely by country, ranging from heavy state censorship to comparatively open environments, depending on government policy.

Internet Censorship Around the World

  • Case Studies:
    • China: Implements the "Great Firewall," restricting access to major platforms and sensitive topics.
    • Russia: Blocks opposition sites and applies “extremism” laws against critics.
    • Iran: Employs heavy monitoring and actively disrupts internet access.
    • Democratic Countries: Use regulatory frameworks to manage harmful content while balancing free speech rights.

SLAPP Lawsuits

  • Definition: A Strategic Lawsuit Against Public Participation is litigation filed not to win on the merits but to intimidate and silence critics through cost and delay.
  • Mechanics of SLAPPs: Corporations may file lawsuits against critics to suppress dissent.
  • Anti-SLAPP Laws: Many US states have developed legal frameworks combating SLAPPs, while Australia lacks comprehensive legislation.

Anonymity on the Internet

  • Definition and Benefits: Anonymity allows individuals to express opinions without identifying themselves, beneficial for vulnerable populations.
  • Risks of Anonymity: Includes potential for harassment, trolling, and loss of accountability.
  • Anonymity Tools: Technologies such as VPNs and anonymous remailers help users conceal their identities.

Doxing and John Doe Lawsuits

  • Doxing Definition: Publishing personal information without consent as a form of harassment.
  • John Doe Lawsuit Function: Enables plaintiffs to uncover the identity of anonymous defendants.
  • Ethical Tension: Balances the right to anonymity with the necessity for accountability regarding harmful speech.

Discussion Question 1

  • Topic: Should platforms verify real identities to protect against abuse, or should anonymity be upheld? Consider the benefits and risks of both approaches.

Hate Speech

  • Definition: Speech aimed at vilifying individuals based on identity characteristics.
  • Australian Approach: Laws exist to restrict hate speech, particularly through the Racial Discrimination Act (Section 18C).
  • Platform Removal Policies: Social media companies have authority to enforce their own content removal policies beyond legal requirements.

Discussion Question 2

  • Topic: How should the balance between free speech and hate speech regulation be struck, and who should decide what speech is acceptable: governments, platforms, or both?

Pornography on the Internet

  • Legal Framework: Adult pornography is generally legal for adults to access, but its availability raises child-protection concerns.
  • Community Standards Variability: Differing local standards complicate judgments about what is obscene across jurisdictions.
  • Workplace and Child Protection Issues: Employers need clear policies governing access to pornographic material at work, and children must be shielded from exposure.

Sexting and Legal Risks

  • Definition: Sending sexually explicit material through electronic means.
  • Legal Implications for Minors: Minors who sext can be prosecuted under child exploitation laws, even when all parties consent.
  • Ethical Concerns: Prosecution can have lasting, disproportionate effects on young people's future opportunities.

CAN-SPAM Act

  • Overview: A US law (2003) that regulates commercial email practices to reduce spam and protect consumer rights.
  • Key Requirements: Accurate sender identification, a working unsubscribe mechanism, and inclusion of a physical postal address (see the sketch below).
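
As a rough illustration of those requirements, here is a minimal Python sketch that assembles a compliant-looking commercial email; the sender, addresses, and URLs are hypothetical, and real compliance also requires honoring opt-outs promptly and avoiding deceptive subject lines.

```python
# Minimal sketch of a CAN-SPAM-style commercial email (hypothetical details).
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "Acme Offers <offers@acme.example>"       # accurate sender identity
msg["To"] = "customer@example.com"
msg["Subject"] = "March sale: 20% off"                  # non-deceptive subject
msg["List-Unsubscribe"] = "<https://acme.example/unsubscribe>"

msg.set_content(
    "Our March sale is on.\n"
    "\n"
    "--\n"
    "Acme Pty Ltd, 123 Example St, Springfield\n"       # physical postal address
    "Unsubscribe: https://acme.example/unsubscribe\n"   # working opt-out link
)
print(msg)
```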

Fake News and Misinformation

  • Definitions: Fake news is fabricated content presented as legitimate journalism; misinformation is false information spread without intent to deceive; disinformation is false information spread deliberately.
  • Impact on Society: Threatens trust in journalism, influences elections, and poses risks to public health.

Combating Misinformation: Challenges and Approaches

  • Identification Issues: Defining misinformation is subjective, and removal decisions risk sliding into censorship.
  • Approaches: Media literacy education, fact-checking initiatives, and transparency about sources and funding.

Case Study: Australian Misinformation Debate (2024)

  • Proposed Legislation: The Combatting Misinformation and Disinformation Bill was introduced in 2024 but withdrawn after criticism that it gave government too much power over lawful speech.

Content Moderation by Platforms

  • Responsibilities of Social Media Platforms: Platforms must moderate content at enormous scale while respecting free expression, facing challenges of volume, inconsistent enforcement, and bias (see the sketch below).
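
One common way platforms handle the scale problem is to automate only high-confidence decisions and route borderline cases to human reviewers; the score and thresholds in this minimal Python sketch are invented for illustration.

```python
# Minimal sketch of a triage step in a moderation pipeline.
# The classifier score and thresholds are hypothetical.
def triage(classifier_score: float) -> str:
    """Route a post based on an automated harm score in [0.0, 1.0]."""
    if classifier_score >= 0.95:
        return "remove"        # high-confidence violation: automated removal
    if classifier_score >= 0.60:
        return "human_review"  # borderline: a person decides, limiting bias
    return "publish"           # low risk: no action taken

print(triage(0.72))  # human_review
```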

Freedom of Expression in the Workplace

  • Employer Rights to Restrict Speech: Employers may generally restrict speech made using company resources or affecting the business; policies should be clear and consistently applied.
  • Employee Protections: Protections are limited, applying mainly to public-sector employees and to whistleblowers under specific laws.

Manager's Checklist for Freedom of Expression Issues

  • Key Considerations for Managers: Develop clear acceptable-use policies, apply monitoring consistently, and coordinate with legal counsel when defamatory or unlawful content arises.

Ethical Balance: Freedom vs. Harm Prevention

  • Tension: Protecting expression while preventing harm requires continuous negotiation as standards evolve.
  • Concluding Thoughts: Clear governance structures, transparency, diverse inputs, and ongoing evaluations are critical as society grapples with these challenges.

Activity: Content Moderation Policy Design

  • Scenario: Create content moderation guidelines for a new social media platform. Consider types of content, reasoning, enforcement mechanisms, and censorship concerns.