Artificial Intelligence Law in Tanzania

AI Law in Tanzania – Current Framework

Tanzania has not yet enacted a comprehensive AI-specific legal framework; there is no standalone Artificial Intelligence Act. However, the legal environment for AI in Tanzania is shaped by several existing laws that apply indirectly to AI-related issues. These include:

Personal Data Protection Act, 2022 (PDPA)

Tanzania enacted this Act to regulate the collection, processing, and storage of personal data.

AI systems that handle personal data, particularly sensitive data, must comply with requirements such as transparency, obtaining consent, and upholding data subject rights (e.g., access, correction, and the right to object to automated decision-making).
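For illustration only, the Python sketch below shows the kind of consent-and-objection gate an AI system might apply before automated processing, in the spirit of the PDPA-style requirements just described. The field names and helper function are assumptions invented for the example, not anything the PDPA itself prescribes.

```python
from dataclasses import dataclass

@dataclass
class DataSubject:
    """Hypothetical record of a data subject's PDPA-relevant preferences."""
    subject_id: str
    consent_given: bool
    objects_to_automated_decisions: bool

def can_run_automated_profiling(subject: DataSubject) -> bool:
    """Allow automated processing only if the subject consented and has not
    objected to automated decision-making (illustrative gate, not legal advice)."""
    return subject.consent_given and not subject.objects_to_automated_decisions

# Example: an AI scoring pipeline would check this gate before processing.
applicant = DataSubject("TZ-001", consent_given=True, objects_to_automated_decisions=False)
if can_run_automated_profiling(applicant):
    print("Proceed with automated processing")
else:
    print("Route to manual review and record the objection")
```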

Cybercrimes Act, 2015

This law criminalizes actions like unauthorized access to computer systems, hacking, data tampering, fraud, and cyberbullying. While it doesn't specifically address AI, any AI system used to commit cybercrimes (e.g., fraud, data theft) would fall under its provisions.

Copyright and Neighbouring Rights Act, 1999

Tanzania’s copyright laws are based on human authorship, which presents challenges when dealing with works created autonomously by AI, such as artworks, music, and software generated by AI.

Intellectual Property Laws

Intellectual property (IP) law does not yet address AI's role in invention and innovation. Patents, for instance, assume human inventorship, leaving AI-driven innovations in a grey area with respect to IP protection.

Constitutional Protections

Tanzania's Constitution guarantees privacy, protection from arbitrary interference, and the right to information. These fundamental rights may come into play when AI systems process personal or sensitive information.

Key Issues with AI in Tanzania’s Current Legal Framework

Lack of Clear Liability Rules: Tanzania’s legal system lacks clear guidelines on liability when AI systems cause harm. If an AI system causes an accident, makes erroneous decisions, or leads to a data breach, it is not clear whether the AI developer, the user, or another party is responsible.

Accountability and Transparency: There is no established framework for ensuring AI systems are transparent, accountable, and fair. Issues like algorithmic bias and discriminatory decision-making could arise without proper oversight.

AI in the Public Sector: AI applications in government services (e.g., welfare distribution, public safety) raise concerns over fairness, discrimination, and human rights violations.

Intellectual Property: The absence of a legal framework for AI-generated works could lead to uncertainty in creative industries. For example, works produced by AI may not be eligible for copyright protection, creating confusion about ownership.

Detailed Hypothetical Case Scenarios

Case 1: AI-powered Loan Approval System Discriminates

Scenario: A Tanzanian bank uses an AI-based loan approval system to process applications. The system uses historical data to predict the likelihood of repayment but inadvertently discriminates against individuals from certain regions or ethnic groups, or those with lower socioeconomic status.

Legal Issues:

Discrimination: The AI system may violate anti-discrimination laws, whether directly or indirectly, if its decisions systematically disadvantage certain groups.

Data Protection: The system processes sensitive personal data, which must be handled according to the PDPA (e.g., obtaining consent and allowing individuals to challenge automated decisions).

Liability: The bank and the AI provider may be held accountable for any harm caused by biased decisions made by the AI system.

Likely Outcome: The bank could face legal challenges from rejected applicants, and the financial regulator may require an audit of the AI system. The bank may be compelled to modify or suspend the use of the system to ensure fairness and transparency.
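The bias described in Case 1 can be made concrete with a short, purely illustrative Python sketch: it compares approval rates across regions in hypothetical decision data and flags a large gap, the sort of evidence an audit ordered by the regulator might look for. The data, group labels, and 80% threshold are assumptions, not requirements drawn from Tanzanian law.

```python
from collections import defaultdict

# Hypothetical loan decisions produced by the AI system: (region, approved) pairs.
decisions = [
    ("Region A", True), ("Region A", True), ("Region A", False), ("Region A", True),
    ("Region B", False), ("Region B", False), ("Region B", True), ("Region B", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for region, approved in decisions:
    totals[region] += 1
    approvals[region] += int(approved)

rates = {region: approvals[region] / totals[region] for region in totals}
print("Approval rates by region:", rates)

# Simple disparate-impact style check: flag if the lowest group's approval rate
# falls below 80% of the highest group's rate (an illustrative threshold only).
low, high = min(rates.values()), max(rates.values())
if high > 0 and low / high < 0.8:
    print("Potential indirect discrimination: audit the model and its training data")
```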

Case 2: AI-Generated Art and Copyright

Scenario: A Tanzanian advertising agency uses an AI tool to generate artistic content for campaigns. The agency then attempts to sell or license the artwork. However, when another artist claims the work is similar to their own, the question arises: who owns the copyright to AI-generated content?

Legal Issues:

Copyright Law: Tanzanian copyright law assumes that an author is a human. If AI creates a piece of work autonomously, there is no clear legal provision on who owns the copyright.

Intellectual Property: The agency may struggle to claim exclusive ownership of the AI-generated work since copyright laws do not currently recognize AI as an author.

Fair Use & Infringement: If the AI was trained on copyrighted works without permission, the generated content could be considered derivative or infringing.

Likely Outcome: The advertising agency might not be able to claim full copyright, and the AI-generated work could be deemed ineligible for protection. If the work is similar to existing copyrighted pieces, the agency may face legal action for infringement.

Case 3: AI in Healthcare — Misdiagnosis Due to Algorithmic Error

Scenario: A private hospital in Tanzania uses an AI system to assist in diagnosing diseases from medical images (e.g., X-rays). The system incorrectly diagnoses a patient with a serious condition, leading to unnecessary treatments.

Legal Issues:

Medical Negligence: The hospital may be liable for using an AI system that led to a misdiagnosis, particularly if the AI system was not sufficiently tested or regularly audited.

Data Protection: The AI system processes sensitive health data, which must be protected under the PDPA.

Informed Consent: Patients may not have been properly informed that an AI system was involved in their diagnosis, which could violate their rights to transparency.

Likely Outcome: The hospital may face a lawsuit for medical negligence and breach of data protection laws. The hospital could be required to compensate the patient, and there may be regulatory scrutiny over the AI system’s implementation.
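One safeguard implicit in Case 3 is keeping a clinician in the loop and keeping an audit trail. The sketch below is a minimal, hypothetical illustration of that idea: low-confidence AI outputs are routed to human review, and every automated result is logged for later audit. The confidence threshold, field names, and example values are assumptions for illustration.

```python
# Illustrative triage for an AI-assisted diagnosis workflow (hypothetical).
AUDIT_LOG = []

def triage_diagnosis(patient_id: str, ai_label: str, ai_confidence: float,
                     threshold: float = 0.90) -> str:
    """Accept the AI suggestion only above a confidence threshold; otherwise
    require human review. Every decision is recorded for audit purposes."""
    routing = "ai_suggested" if ai_confidence >= threshold else "human_review_required"
    AUDIT_LOG.append({
        "patient_id": patient_id,
        "ai_label": ai_label,
        "confidence": ai_confidence,
        "routing": routing,
    })
    return routing

print(triage_diagnosis("P-1023", "pneumonia", 0.72))  # -> human_review_required
print(triage_diagnosis("P-1024", "normal", 0.97))     # -> ai_suggested
```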

Case 4: AI-Driven Job Recruitment System Favors Certain Groups

Scenario: A Tanzanian tech company uses an AI-powered recruitment tool to screen applicants for job positions. The AI system automatically rejects applications from certain demographic groups, such as older applicants, based on patterns learned from previous hiring data.

Legal Issues:

Anti-discrimination: If the AI system causes discrimination against protected groups (e.g., based on age, gender, or ethnicity), it may violate Tanzania's equality and anti-discrimination laws.

Data Protection: The AI system processes personal data, and candidates must have access to information on how their data is used and the logic behind the AI’s decisions.

Fairness: The AI's decision-making process may lack transparency, leaving candidates unable to challenge unfair decisions.

Likely Outcome: The company may face legal challenges from applicants who feel they were unfairly rejected. The company may need to retrain the AI system to remove biases and could be fined by regulatory authorities.
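A simple technical safeguard relevant to Case 4 is auditing the model's inputs before deployment. The hypothetical sketch below blocks protected attributes and obvious proxies from a recruitment model's feature set; the attribute and proxy lists are illustrative assumptions and would not, on their own, amount to a complete fairness solution.

```python
# Illustrative pre-deployment feature audit for a recruitment model (hypothetical).
PROTECTED = {"age", "gender", "ethnicity", "date_of_birth"}
KNOWN_PROXIES = {"graduation_year": "age", "marital_status": "gender"}

def audit_features(features: list[str]) -> list[str]:
    """Return the features that may be used, reporting any that are
    protected attributes or likely proxies for them."""
    allowed = []
    for feature in features:
        if feature in PROTECTED:
            print(f"Blocked protected attribute: {feature}")
        elif feature in KNOWN_PROXIES:
            print(f"Blocked likely proxy: {feature} (proxy for {KNOWN_PROXIES[feature]})")
        else:
            allowed.append(feature)
    return allowed

print(audit_features(["skills", "graduation_year", "experience_years", "age"]))
# -> ['skills', 'experience_years'] after blocking 'graduation_year' and 'age'
```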

Case 5: Use of AI for Automated Tax Assessment

Scenario: The Tanzania Revenue Authority (TRA) deploys an AI system to automatically assess and calculate taxes based on data provided by businesses. The AI makes an error, leading to over-taxation of some companies.

Legal Issues:

Tax Law: The AI system must comply with Tanzanian tax laws, which are subject to interpretation. The automated system might misinterpret data or fail to account for tax exemptions, leading to disputes.

Data Protection: The TRA processes sensitive financial data, and businesses have a right to know how their data is being processed and how decisions are made.

Accountability: The TRA must ensure that businesses have a way to appeal the AI-generated tax assessments.

Likely Outcome: Companies that are over-taxed could challenge the system, and the TRA may be required to review or halt the use of the AI system. Regulatory measures may be introduced to ensure fairness and prevent errors in automated tax assessments.
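The over-taxation in Case 5 can be illustrated with simple arithmetic: the sketch below computes the same liability with and without an exemption the automated system failed to apply. The figures and the flat 30% rate are hypothetical and are not drawn from Tanzanian tax law.

```python
# Illustrative tax arithmetic: the effect of a missed exemption (hypothetical figures).
def assess_tax(gross_income: float, exemptions: float, rate: float = 0.30) -> float:
    """Tax due on income after exemptions, at a flat illustrative rate."""
    taxable = max(gross_income - exemptions, 0.0)
    return round(taxable * rate, 2)

correct = assess_tax(gross_income=100_000_000, exemptions=20_000_000)
erroneous = assess_tax(gross_income=100_000_000, exemptions=0)  # exemption missed by the AI

print(f"Correct assessment:   {correct:,.2f}")
print(f"Erroneous assessment: {erroneous:,.2f}")
print(f"Over-taxation:        {erroneous - correct:,.2f}")  # the amount a business would dispute
```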

Case 6: AI in Social Welfare – Risk of Inaccurate Benefits Distribution

Scenario: A Tanzanian government agency uses AI to assess eligibility for social welfare benefits. The AI makes errors in assessing the eligibility of individuals, leading to some citizens being denied benefits while others receive undeserved payments.

Legal Issues:

Human Rights: Citizens may argue that the AI system violates their rights to social protection, especially if they are unfairly denied benefits due to algorithmic errors.

Transparency: There must be clear rules on how AI is used to make decisions about welfare, and individuals should be able to challenge decisions.

Accountability: If the AI system leads to widespread errors, the government agency may be held accountable for mismanagement.

Likely Outcome: Affected citizens could sue the government for violations of their rights. The government may be required to halt the AI system, improve its accuracy, or adopt a more transparent decision-making process.
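The transparency and accountability concerns in Case 6 point toward decision records that citizens can actually challenge. The sketch below is a hypothetical illustration: each automated eligibility decision stores machine-readable reasons alongside the outcome, so a denial can be reviewed on appeal. The criteria, thresholds, and field names are invented for the example.

```python
# Illustrative, appealable eligibility decision record (hypothetical criteria).
INCOME_CEILING = 500_000  # hypothetical monthly income ceiling for eligibility

def assess_eligibility(citizen_id: str, monthly_income: float, dependants: int) -> dict:
    """Return a decision together with machine-readable reasons,
    suitable for review and appeal."""
    reasons = []
    if monthly_income > INCOME_CEILING:
        reasons.append("income_above_ceiling")
    if dependants == 0:
        reasons.append("no_registered_dependants")
    eligible = not reasons
    return {
        "citizen_id": citizen_id,
        "eligible": eligible,
        "reasons": reasons or ["all_criteria_met"],
        "appeal_allowed": True,
    }

print(assess_eligibility("CTZ-554", monthly_income=650_000, dependants=2))
# A denied citizen can see exactly which criterion failed and lodge an appeal.
```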

Conclusion

While Tanzania has begun regulating certain aspects of AI use, especially in data protection and cybersecurity, the country lacks a dedicated AI law. This means that many AI-related issues are still handled by existing laws, which may not be adequate for the unique challenges posed by AI technologies. The biggest challenges involve liability, transparency, and accountability, particularly when AI systems make critical decisions in sectors like healthcare, finance, and public welfare.

As AI technologies become more widespread in Tanzania, it is highly likely that new laws and regulations will need to be developed to address these gaps, ensuring fairness, accountability, and transparency in AI usage across all sectors.
