AI Ethics in Action: Crafting Policy for Responsible AI



Regulation of Artificial Intelligence is picking up around the world following explosive growth in this new technology. In the United States, a multifaceted approach to AI governance is emerging, consisting of a White House Executive Order that sets high-level priorities and guidelines, federal initiatives that address specific sectors or concerns, and state-level actions that tailor regulations to local needs.


Existing regulatory agencies, such as the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), and others are also playing a significant role, leveraging their authority to investigate and enforce existing consumer protection and competition laws in the context of AI technologies. This dynamic landscape reflects the ongoing effort to balance innovation with the responsible and ethical development of AI in the United States. But to date, a comprehensive federal regulation similar to the recently adopted EU AI Act is still missing. Consequently, enterprises in the US must navigate a complex regulatory landscape, as the specific compliance requirements they encounter can vary significantly from one country or region to another.


Ahead of the Curve: Tomorrow's AI Policy Landscape

While waiting for comprehensive federal AI regulation, US-based organizations can get a head start by focusing on industry-specific rules that have emerged over the course of the last two years. For example, earlier this year, the FTC issued multiple rules and rulings on artificial intelligence. Some of these include:


  • Protections against AI-enabled scam calls, extending telemarketing protections to business-to-business calls

  • New guidance on how privacy changes must be communicated to customers

  • A finalized rule targeting AI-enabled impersonation of businesses and government agencies to scam consumers


In September 2023, the CFPB issued guidance on how AI and complex machine learning models can impact the legal requirements that lenders must meet when making credit decisions.


Just last week, the US Senate Committee on Homeland Security published its findings and conclusions on the use of AI in financial services. In short, the report discusses the increasing use of artificial intelligence (AI) and machine learning (ML) by hedge funds to inform trading decisions. While AI/ML technologies offer potential benefits, they also raise concerns about inadequate disclosures to clients, increased market risks, and the amplification of traditional investment risks. The recommendations emphasize the need for human oversight and accountability in AI systems to ensure safety, accuracy, and ethical decision-making.


On April 29, 2024, another government agency, the National Institute of Standards and Technology (NIST), issued a companion resource to the NIST AI Risk Management Framework (AI RMF), specifically for Generative AI. This guidance is designed to help organizations manage the unique risks associated with Generative AI, such as confabulation, data privacy issues, and the generation of harmful content. It emphasizes that these risks can appear throughout the AI lifecycle and provides actions to mitigate them, categorized by the AI RMF’s subcategories. The draft stresses the importance of understanding legal requirements, incorporating trustworthy AI characteristics into policies, and conducting thorough risk assessments.


To help you get started on the above NIST recommendations, I have prepared a template of what an AI policy document could look like at an organizational level, along with a few illustrative code sketches of how selected controls might be implemented.


Template for AI Policy


Purpose of the AI Policy
  • Establish guidelines and best practices for AI: The NIST guidance emphasizes that these guidelines should be risk-based, considering the specific applications and potential impacts of AI within the organization. This includes aligning AI usage with applicable laws and regulations, particularly those concerning data privacy and intellectual property.

  • Ensure the AI usage aligns with your company’s core values: The NIST guidelines highlight the importance of integrating trustworthy AI characteristics into organizational policies. This means ensuring that AI is used in a way that aligns with the company's values and promotes ethical considerations.

  • Promote stakeholder safety and welfare: The NIST guidelines stress the need to consider the broader impacts of AI on individuals, communities, and society. This includes mitigating risks such as bias, discrimination, and the generation of harmful content.


Defining the scope for use of AI
  • Confirm that your AI policy covers all of your employees, contractors, and partners: The NIST guidelines recommend defining and communicating organizational access to AI through various functions, including management, legal, and compliance.

  • Determine which AI tools, systems, and platforms are in use and covered: The NIST guidelines suggest maintaining an inventory of AI systems, documenting their intended use, capabilities, and potential risks (a minimal inventory sketch follows this list).

  • Ensure your policy allows the integration of future AI technologies or tools: The NIST guidelines emphasize the need for ongoing monitoring and periodic review of AI systems to adapt to advancements and emerging risks.
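
As a starting point for the inventory mentioned above, here is a minimal sketch of what a system registry could look like in Python. The record fields (owner, intended_use, risk_level) and the example entry are illustrative assumptions, not prescribed by NIST.

```python
# Minimal sketch of an AI system inventory, assuming a simple in-memory
# registry. Field names and the example entry are illustrative.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                # e.g. "customer-support-chatbot"
    owner: str               # accountable team or individual
    intended_use: str        # documented purpose of the system
    capabilities: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)
    risk_level: str = "unassessed"   # e.g. low / medium / high

# Central registry keyed by system name.
inventory: dict[str, AISystemRecord] = {}

def register_system(record: AISystemRecord) -> None:
    """Add a system to the inventory so its scope and risk can be tracked."""
    inventory[record.name] = record

register_system(AISystemRecord(
    name="customer-support-chatbot",
    owner="Support Engineering",
    intended_use="Answer routine customer questions",
    capabilities=["text generation"],
    known_risks=["confabulation", "data privacy"],
    risk_level="medium",
))
```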

Third-party AI Systems
  • Verify that any third-party providers meet your legal and ethical standards: The NIST guidelines provide detailed instructions on managing AI risks associated with third-party entities, including conducting due diligence, establishing clear contracts, and monitoring for compliance.


Ethical use of AI
  • Avoid causing harm or facilitating malicious use of AI: The NIST guidelines outline various risks associated with Generative AI, such as the generation of harmful content, disinformation, and privacy violations. The policy should include measures to mitigate these risks.

  • Ensure transparent use of AI, prioritizing transparency throughout the AI lifecycle: The NIST guidelines emphasize the importance of transparency and accountability in AI systems. This includes documenting the AI development process, data sources, and model architectures.

  • Make stakeholders aware of the involvement of AI tools: The NIST guidelines recommend disclosing the use of AI to end-users and establishing clear communication channels for feedback and concerns.

  • Utilize centralized AI governance for transparent record-keeping: The NIST guidelines suggest documenting AI system ownership, intended use, and assumptions and limitations (a minimal record-keeping sketch follows this list).

  • Take responsibility for outcomes generated by AI systems: The NIST guidelines highlight the need for organizational accountability for AI systems, including establishing policies for incident response and redress mechanisms.
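
To make the record-keeping and accountability points above concrete, here is a minimal sketch of a central audit trail for AI interactions, assuming a JSON-lines log. The file name and fields are illustrative; hashes are stored instead of raw text so the log itself holds no sensitive content.

```python
# Minimal sketch of centralized record-keeping for AI interactions,
# assuming a JSON-lines audit log. File name and fields are illustrative.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical central log location

def log_ai_output(system: str, prompt: str, output: str, user: str) -> None:
    """Append an auditable record of one AI interaction."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "user": user,
        # Store hashes rather than raw text so the log holds no content.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_output("customer-support-chatbot", "Where is my order?",
              "Your order shipped yesterday.", "agent-42")
```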


Data Protection
  • Ensure all personal or sensitive data used by AI systems is anonymized: The NIST guidelines provide detailed instructions on measuring and managing privacy risks associated with AI systems, including anonymizing data and implementing robust cybersecurity measures (a pseudonymization sketch follows this list).

  • Securely store all AI data: The NIST guidelines recommend establishing protocols and access controls for training data and ensuring compliance with data protection regulations.

  • Adhere to the company’s data protection policies: The NIST guidelines emphasize the importance of aligning AI policies with existing data governance policies and legal requirements.

  • Establish clear data retention and deletion policies for AI data: The NIST guidelines suggest implementing data security and privacy controls for stored and decommissioned AI systems.
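
As one illustration of the anonymization point above, the sketch below pseudonymizes email addresses before text reaches an AI system. The regex, salt, and token format are illustrative assumptions; a production deployment should rely on a vetted PII-detection tool rather than a single pattern.

```python
# Minimal sketch of pseudonymizing personal data before it reaches an AI
# system, assuming email addresses are the sensitive field.
import hashlib
import re

PSEUDONYM_SALT = b"rotate-me-regularly"  # hypothetical secret salt
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(text: str) -> str:
    """Replace each email address with a stable, non-reversible token."""
    def _token(match: re.Match) -> str:
        digest = hashlib.sha256(PSEUDONYM_SALT + match.group().encode())
        return f"<user-{digest.hexdigest()[:12]}>"
    return EMAIL_RE.sub(_token, text)

print(pseudonymize("Contact jane.doe@example.com about the invoice."))
# -> "Contact <user-...> about the invoice."
```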


AI Regulations (local and regional)
  • Make sure you adhere to applicable AI laws and regulations: The NIST guidelines provide a comprehensive overview of legal and regulatory requirements involving AI, including data privacy, security, and non-discrimination.

  • Stay informed about local and regional AI regulations to ensure compliance: The NIST guidelines recommend aligning Generative AI use with applicable laws and policies and establishing transparent acceptable use policies.


Bias Mitigation
  • Actively identify any potential biases in your AI tools: The NIST guidelines provide detailed instructions on measuring and mitigating biases in AI systems, including conducting fairness assessments and implementing bias mitigation techniques (a simple fairness-check sketch follows this list).

  • Ensure content generated by AI is inclusive and doesn’t inadvertently discriminate: The NIST guidelines emphasize the importance of considering fairness and bias throughout the AI lifecycle, from data collection and model development to deployment and monitoring.

  • Carry out regular reviews and update AI models to reduce and eliminate biases: The NIST guidelines recommend establishing processes for ongoing monitoring and periodic review of AI systems to identify and address biases.
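
For the fairness assessments mentioned above, one common starting metric is the demographic parity gap: the difference in positive-outcome rates across groups. The sketch below is a minimal illustration; the group labels, example data, and the 0.2 review threshold are assumptions, and no single metric is sufficient on its own.

```python
# Minimal sketch of a fairness check using the demographic parity gap.
from collections import defaultdict

def demographic_parity_gap(outcomes: list[tuple[str, int]]) -> float:
    """outcomes: (group_label, 1 if positive decision else 0) pairs."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(decisions)
if gap > 0.2:  # illustrative review threshold
    print(f"Parity gap {gap:.2f} exceeds threshold; flag for bias review.")
```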


AI Collaboration (Human-in-the-loop)
  • Incorporate active human input and expertise into the AI lifecycle: The NIST guidelines highlight the importance of human involvement in AI systems, including defining roles and responsibilities for human-AI configurations and oversight.

  • Ensure humans actively participate in the training, evaluation, and/or operation of ML models, providing feedback, annotation, and guidance: The NIST guidelines recommend establishing procedures for human oversight and incorporating human review processes to assess and filter AI-generated content (a review-gate sketch follows this list).

  • Treat AI as a tool to be used alongside humans, not as a full replacement: The NIST guidelines emphasize that AI should augment human capabilities, not replace them, and that human judgment should be incorporated into AI decision-making processes.
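
One way to operationalize human oversight is a confidence-based review gate: outputs the model is unsure about go to a person instead of being released automatically. This is a minimal sketch; the confidence score, the 0.85 threshold, and the review workflow are illustrative assumptions.

```python
# Minimal sketch of a human-in-the-loop gate: low-confidence AI outputs
# are routed to a reviewer instead of being released automatically.
from dataclasses import dataclass

@dataclass
class AIResult:
    content: str
    confidence: float  # assumed to be reported by the model, in [0, 1]

REVIEW_THRESHOLD = 0.85  # illustrative; tune per use case and risk level

def request_human_review(result: AIResult) -> str:
    """Placeholder for the reviewer workflow (queue, UI, sign-off)."""
    print(f"Queued for human review (confidence={result.confidence:.2f})")
    return "PENDING_REVIEW"

def release(result: AIResult) -> str:
    """Auto-release confident outputs; send the rest to a human."""
    if result.confidence >= REVIEW_THRESHOLD:
        return result.content
    return request_human_review(result)

print(release(AIResult("Refund approved per policy 4.2.", 0.62)))
```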


Training and Development
  • Ensure that anyone who works with AI has received training on how to utilize the technology: The NIST guidelines recommend providing training to AI actors and users on AI fundamentals, risks, and responsible use.

  • Stay updated on the latest developments in AI: The NIST guidelines emphasize the need for continuous learning and awareness of evolving AI technologies and best practices.

  • Conduct refresher courses to make sure your team can continue with responsible AI usage: The NIST guidelines suggest establishing continuous improvement processes for AI risk management and incorporating feedback from AI actors and users.


Reviewing the Policy
  • Make sure your policy is up to date and relevant with regular reviews: The NIST guidelines recommend ongoing monitoring and periodic review of AI risk management processes and outcomes.

  • React to significant AI-related incidents or advancements and update your policy accordingly: The NIST guidelines emphasize the need to adapt to emerging risks and advancements in AI technology.

  • Communicate any changes to your AI policy: The NIST guidelines recommend establishing clear communication channels for disseminating AI policies and updates to all relevant stakeholders.


Enforcing the Policy
  • Ensure your team is aware that any breaches of this policy may result in disciplinary action: The NIST guidelines recommend establishing policies for individual and organizational accountability regarding the use of AI.

  • Implement ways to monitor adherence to the AI policy: The NIST guidelines suggest implementing monitoring mechanisms and establishing clear channels for reporting potential breaches.

  • Establish clear channels for reporting potential breaches: The NIST guidelines recommend establishing incident response plans and procedures for addressing and recovering from AI incidents (a minimal reporting sketch follows).
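
Closing out the template, here is a minimal sketch of a breach-reporting channel that records suspected policy violations in a central store. The file name, severity levels, and escalation rule are illustrative assumptions, not a definitive implementation.

```python
# Minimal sketch of a breach-reporting channel, assuming reports land in
# a central JSON-lines store.
import json
from datetime import datetime, timezone

INCIDENT_LOG = "ai_incident_log.jsonl"  # hypothetical central store

def report_breach(reporter: str, system: str, description: str,
                  severity: str = "medium") -> None:
    """Record a suspected AI policy breach and escalate if needed."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reporter": reporter,
        "system": system,
        "description": description,
        "severity": severity,  # e.g. low / medium / high
    }
    with open(INCIDENT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    if severity == "high":
        print("Escalate: trigger the incident-response plan immediately.")

report_breach("analyst-7", "customer-support-chatbot",
              "Model output contained an unmasked customer address.",
              severity="high")
```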


In conclusion, the evolving landscape of AI regulation, particularly for Generative AI, presents a challenge for businesses. However, the establishment of a robust responsible AI (RAI) initiative can serve as a proactive measure. By adhering to core principles of accountability, transparency, privacy, security, fairness, and inclusiveness in AI development and deployment, companies can create a framework that not only aligns with current best practices but also prepares them for future regulatory changes, regardless of their specific form or regional variations.


I welcome any comments. Until next time.


 
 
 
