
January 4, 2024

Ethical AI for Startup Founders and Teams: Key Questions and Foundational Steps

Keren Bitan is the founding principal at Tandem Impact, a boutique sustainability consulting firm. She specializes in advising investment firms — including Norwest — and high-growth companies on responsible investing and sustainable business growth. Keren is available to provide guidance to Norwest portfolio companies as part of our robust portfolio services offering. For any inquiries, reach out to Norwest Principal, CMO and Operating Executive Lisa Ames.


With the rise of generative AI, most startups are developing new products and services that use artificial intelligence (AI) or machine learning (ML), integrating AI/ML into their operations (from hiring to content marketing), or both. At the same time, company stakeholders — including investors, regulators, governments, customers, and employees — are increasingly asking how companies design and deploy ethical AI.

How can founders ensure their companies are using and developing AI responsibly? And what does ethical AI even mean? In this piece I’ll give a high-level overview of select ethical AI considerations and outline practical steps founders and tech leaders can take to address risks.

What Is Ethical AI?

Ethical AI refers to the practice of ensuring AI is developed and utilized in keeping with relevant morals, values, and principles. In 2020, the Berkman Klein Center for Internet & Society at Harvard University published a landmark paper aiming to represent a consensus in AI-related principles put forth by governments, intergovernmental organizations, companies, professional associations, advocacy groups, and multi-stakeholder initiatives. “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI” (and the associated Principled AI Map) drew from 36 prominent AI principles documents to identify eight key themes related to developing and deploying AI responsibly and ethically:

    1. Privacy
    2. Accountability
    3. Safety and security
    4. Transparency and explainability
    5. Fairness and non-discrimination
    6. Human control of technology
    7. Professional responsibility
    8. Promotion of human values

For entrepreneurs whose companies develop or utilize artificial intelligence systems, it’s useful to think about these themes across a few dimensions:

    1. AI system development — when creating a system like ChatGPT, what are the risks and considerations in development and implementation (e.g., privacy and labor implications)?
    2. Tool use in company operations — when integrating AI into hiring procedures, for example, what are potential risks?
    3. Integration into development of products and services — e.g., Zoom’s proposed policy that would have allowed the company to train its AI on customer data with no opt-out (walked back only after journalists noticed it), or using AI as part of the customer service experience.

Let’s dive into five of the key themes from the Principled AI Map that were most common in its data set, and that company leaders are often most focused on initially:

    • Privacy
    • Accountability
    • Safety and security
    • Transparency and explainability
    • Fairness and non-discrimination

Below I’ve outlined representative questions to consider across these themes when developing or implementing AI systems. To wrap up, I’ll share three foundational steps for implementing ethical AI practices: designing an ethical AI value system, incorporating ethical AI education into company resources, and adopting tools to help teams evaluate risk and consequences.

Prioritize Privacy

AI often relies on tremendous amounts of data and can present a variety of privacy issues. In fact, 97 percent of the Principled AI Map documents included privacy as an important consideration when building and deploying AI systems. For generative AI systems, privacy issues may be foundational to their existence.

Privacy in this context refers to “the idea that AI systems should respect individuals’ privacy, both in the use of data for the development of technological systems and by providing impacted people with agency over their data and decisions made with it.”

The US Federal Trade Commission recently ordered WW International (previously Weight Watchers) to delete its algorithms built on data the company didn’t have permission to use. Privacy is a complex issue that won’t be fully solved any time soon — with that in mind, tech teams should continue to prioritize privacy considerations when developing or deploying AI systems.

Select Privacy Questions

    1. Are our tools in compliance with existing regulations like GDPR or CCPA?
    2. Should our company be using tools that intrinsically compromise privacy and may not comply with existing privacy regulations? If we do, what risks are we taking, and how are we prepared for those risks?
    3. How can we prevent Personally Identifiable Information (PII) from leaking through the use of generative AI tools? (For example, ChatGPT Enterprise promotes “enterprise-grade security and privacy” – does this satisfy our requirements?) See the sketch after this list for one basic redaction pattern.
    4. In developing AI systems, are we enabling users to consent to or refuse the use of their personal information? Are users able to amend data if it is incomplete, or remove personal data?
    5. How can we ensure privacy by design, or the integration of privacy into how we build our AI system and the lifecycle of the data in our system?
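To make question 3 concrete, here is a minimal sketch of redacting obvious PII before a prompt ever leaves your infrastructure. The regex patterns and the hypothetical safe_prompt helper are illustrative assumptions, not a complete solution; production systems typically pair this kind of screening with a dedicated PII-detection service and human review.

```python
import re

# Illustrative patterns only -- real PII detection needs far broader coverage
# (names, addresses, account numbers) and is usually handled by a dedicated service.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with placeholder tokens before text is sent to an external model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

def safe_prompt(user_text: str) -> str:
    """Redact first, then hand off to whatever generative AI client the team uses (hypothetical)."""
    cleaned = redact_pii(user_text)
    # return call_llm(cleaned)  # hypothetical client call
    return cleaned

print(safe_prompt("Follow up with jane.doe@example.com at 415-555-0199."))
```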

Implement Mechanisms for Accountability

Accountability as a theme is present in 97 percent of the Principled AI Map’s reviewed documents. The Principled AI piece highlights that AI must be developed with appropriate mechanisms to ensure that it is benefiting humanity and not causing harm. As the regulatory environment struggles to keep up with the pace of technology development, leaders need to craft their company’s approach to ensuring ethical AI practices are actually integrated into product and operations decisions. At the same time, companies will also need to comply with upcoming regulation (e.g., the European Union’s AI Act).

The duty to ensure ethical AI development and use should not reside within the technology itself; instead, it should be shared across the people designing, developing, and deploying the system.

Select Accountability Questions

    1. What checkpoints exist during development and before launch where the team reviews a product or feature and confirms risks have been mitigated?
    2. How will we discern which use cases are most risky, enabling our teams to integrate AI where risk is lower and to monitor closely where risk is higher?
    3. Are relevant stakeholder groups identified and consulted when designing and using AI systems? Stakeholders might include customers, users, employees, investors, ethics experts, communities impacted by the system, and others.
    4. Who will be responsible for developing our ethical AI policy and accountability mechanisms (including those related to product and operations)? Do we have a monitoring body to ensure accountability?
    5. Are there public commitments our team can make or sign that signal our approach and help hold our company accountable (e.g., Adobe’s Commitment to AI Ethics, Salesforce’s AI Acceptable Use Policy, Responsible Innovation Labs Responsible AI Commitments)? If you make a commitment, make sure you have the internal capacity to follow through.
    6. How are we evaluating the accuracy of output information? (Accuracy is another important concept to explore; see the sketch below for one lightweight approach.)
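For question 6, even a small evaluation harness helps: keep a hand-curated set of prompts with known-good reference answers and track how often the system’s output matches. The sketch below uses a hypothetical generate_answer function standing in for your model; real evaluations usually add semantic similarity scoring or human grading rather than simple substring matching.

```python
# Minimal accuracy check against a hand-curated reference set (illustrative only).
REFERENCE_SET = [
    {"prompt": "What year did GDPR enforcement begin?", "expected": "2018"},
    {"prompt": "What does PII stand for?", "expected": "personally identifiable information"},
]

def generate_answer(prompt: str) -> str:
    # Hypothetical stand-in for the team's model or API call.
    return ""

def evaluate(reference_set) -> float:
    """Return the fraction of prompts whose output contains the expected answer."""
    hits = 0
    for case in reference_set:
        output = generate_answer(case["prompt"]).lower()
        if case["expected"].lower() in output:
            hits += 1
    return hits / len(reference_set)

print(f"Accuracy on reference set: {evaluate(REFERENCE_SET):.0%}")
```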

Evaluate Safety and Security

In developing the Principled AI Map, about 75 percent of reviewed documents referenced safety and security. Safety concerns whether an AI system functions properly and avoids unintended harm to humans and the planet during development and after deployment. Security refers to the ability to avoid and protect against external threats to an AI system. Notably, the White House’s recent Executive Order highlighted safety, security, and trustworthiness in both the development and use of artificial intelligence.

Select Safety and Security Questions

    1. Are we well versed in how we ensure the AI systems we use or develop are safe and secure, and are we able to share this with stakeholders? For builders: How can we build and test the system to prevent possible misuse?
    2. Do the AI tools we use or develop avoid harm to living beings and the environment throughout their lifecycle? Are we clear on the supply chain behind these systems, and is it an ethical supply chain?
    3. Are we adequately applying existing security standards and regulations to this AI tool or use case?
    4. Is our team protecting sensitive information when utilizing AI software (e.g., anonymizing and encrypting data)? See the sketch after this list for one basic pseudonymization pattern.
    5. Are we developing and implementing relevant policies and resources to help employees decide when and how to use generative AI specifically? (See this checklist from the Future of Privacy Forum.) Do we offer training for relevant teams on proper AI usage?
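On question 4, a common starting pattern is to pseudonymize direct identifiers before records are logged or passed to an AI tool. The sketch below is illustrative, assuming a secret salt managed in your secrets store and a record schema with an email field; keyed hashing keeps records joinable without exposing the raw identifier, but it is not a substitute for a full anonymization and encryption strategy.

```python
import hashlib
import hmac

SECRET_SALT = b"load-from-your-secrets-manager"  # assumption: managed outside source control

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash so records stay joinable but unreadable."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def sanitize_record(record: dict) -> dict:
    """Strip or pseudonymize sensitive fields before the record is logged or sent to an AI tool."""
    cleaned = dict(record)
    if "email" in cleaned:
        cleaned["email"] = pseudonymize(cleaned["email"])
    cleaned.pop("ssn", None)  # drop fields that never need to leave your systems
    return cleaned

print(sanitize_record({"email": "jane.doe@example.com", "ssn": "123-45-6789", "plan": "pro"}))
```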

Examine Transparency and Explainability to Tackle AI Governance

AI systems should not be mysterious black boxes, but as the Principled AI paper states, “Perhaps the greatest challenge that AI poses from a governance perspective is the complexity and opacity of the technology.” The theme of transparency (seeing how an AI model is working) and explainability (understandable concepts and outputs that can be outlined and evaluated) highlights the importance of oversight. While there has been a debate about how accurate a transparent model can be, recent research aims to show AI can be both accurate and transparent.

In July 2023, seven large tech companies agreed to voluntary commitments regarding “safe, secure and trustworthy AI.” However, University of Washington Professor Emily Bender notes a major missing piece: data and model documentation. “Without documentation, one cannot try to understand training data characteristics in order to mitigate some of these attested issues or even unknown ones. The solution, we propose, is to budget for documentation as part of the planned costs of dataset creation, and only collect as much data as can be thoroughly documented within that budget.” See more here about risks related to Large Language Models (LLMs) like ChatGPT.

Select Transparency and Explainability Questions

    1. Can our team have oversight of the operations of this AI model? Is there an ability to “see into” the model throughout the design, development, and deployment of this AI system?
    2. Are we able to translate technical concepts and decision outputs into a comprehensible outline that enables evaluation?
    3. Do we have appropriate documentation to understand the potential unintended consequences of this tool? (See the documentation sketch after this list.)
    4. Are we budgeting for appropriate governance mechanisms?
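One lightweight way to act on questions 3 and 4 is to treat documentation as an artifact that ships with every model or dataset. Below is a minimal sketch of a “model card” record; the field names are assumptions inspired by published model-card and datasheet templates, not a standard schema, so adapt them to your own governance process.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal documentation record kept alongside a model; extend to fit your governance process."""
    name: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str]
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    reviewed_by: str = ""
    review_date: str = ""

card = ModelCard(
    name="support-ticket-classifier-v2",
    intended_use="Route inbound support tickets; not for employment or credit decisions.",
    training_data_sources=["anonymized internal tickets (2021-2023)"],
    known_limitations=["Underperforms on non-English tickets"],
    evaluation_metrics={"macro_f1": 0.87},
    reviewed_by="AI governance group",
    review_date="2024-01-04",
)

print(json.dumps(asdict(card), indent=2))
```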

Ensure Fairness and Non-discrimination in Product Development and Internal Operations

Concerns about encoding bias in AI tools have been voiced for years. For example, Safiya Noble’s seminal book Algorithms of Oppression was based on research on Google search algorithms, examining search results from 2009 to 2015. Fairness and non-discrimination principles were noted in all the documents reviewed by the Principled AI Map team.

Algorithmic bias, or “the systemic under- or over-prediction of probabilities for a specific population,” can trickle into AI systems in a variety of ways. The Principled AI paper explains some of these ways, including:

  • a system might be trained on unrepresentative, flawed or biased data
  • the predicted outcome may be a bad proxy for the actual outcome of interest
  • the outcome of interest may be influenced by previous biased decisions

It’s clear that integrating AI solutions without an ethical AI practice can result in discriminatory behavior. Take these real-life examples: facial recognition software failing to recognize Black faces, and anti-Black bias in AI used for hiring and lending decisions.

Select Fairness and Non-discrimination Questions

    1. What is the source data, and is the training data biased? If so, in what way? (See the sketch below for one simple screening check.)
    2. Are decisions made using this AI tool more equitable than decisions made without it?
    3. Are the risks of using this solution greater than those posed by the solution we were using before?
    4. Are we ensuring diverse representation in the teams who build and/or evaluate AI systems we use or deploy?

If you answer no to question 2 and yes to question 3, reconsider whether you should use this tool for the given purpose.
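For question 1, one simple diagnostic teams often start with is comparing positive-outcome rates across groups in the training data or in the model’s decisions; a ratio well below 1.0 relative to the best-treated group is a signal to investigate. The sketch below is illustrative, with hypothetical group and selected fields; disparity metrics like this are a screening tool, not proof of fairness.

```python
from collections import defaultdict

# Hypothetical records: each has a demographic group and whether the outcome was positive.
records = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "B", "selected": True},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]

def selection_rates(rows):
    """Compute the positive-outcome rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += int(row["selected"])
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(records)
baseline = max(rates.values())
ratios = {g: rate / baseline for g, rate in rates.items()}
print(rates)   # per-group positive-outcome rates
print(ratios)  # ratios far below 1.0 (a common rule of thumb is 0.8) warrant investigation
```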

Three Foundational Steps for Implementing Ethical AI Practices

The questions above are only a few of the important inquiries for developing and implementing responsible AI. Additional considerations related to the rest of the Principled AI Map themes (Human Control of Technology, Professional Responsibility, Promotion of Human Values) are crucial as well.

AI presents a variety of issues, some new and some not so new. It exacerbates the transparency and explainability challenges we already face with big data, and amplifies ubiquitous challenges related to areas like human rights, security, and carbon emissions (check out this tool to evaluate emissions related to ML). Regulators and industry participants also need to contend with misinformation, synthetic text, intellectual property, and generative AI’s social impacts.
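If your team wants a concrete starting point on the emissions point above, open-source trackers can estimate energy use and emissions during training runs. The sketch below uses the codecarbon package as one example (my assumption, not necessarily the tool linked above); train_model is a hypothetical stand-in for your actual training routine.

```python
# Requires: pip install codecarbon
from codecarbon import EmissionsTracker

def train_model():
    # Hypothetical stand-in for your actual training loop.
    pass

tracker = EmissionsTracker(project_name="example-training-run")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent for the run
    print(f"Estimated emissions: {emissions_kg} kg CO2-eq")
```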

Here are three enabling steps to tackle ethical AI with your team.

1. Values — Develop and implement your ethical AI value system. You don’t need to start from scratch. The Principled AI Map is a good place to look for relevant values. You can also integrate current company values or explore what other companies have implemented. Take a look at IBM’s AI ethics approach, for example.

In implementing these values, don’t forget about accountability. Having values and principles in place is great, but it’s also important to identify the folks who are responsible for ensuring these values are integrated into how the company operates every day. Use the values you align on to draft your own responsible AI use and development policy.

As teams design and implement powerful new AI tools, they should refer back to the principles that ideally guide all product, service and company building — principles related to equity, balance of power, responsible and sustainable development and human rights. Are these principles already woven into how you build and run your company? If not, developing an ethical AI practice will be more difficult.

2. Education — Incorporate ethical AI into professional development and company resources. Employees, leadership, and board members will all need support in navigating this space. Education should focus not only on the opportunities related to integrating AI, but also how to avoid relevant risks. If you have a learning and development function, you might incorporate ethical AI resources there.

Certain teams in the company will require more education than others — make sure education is prioritized for those teams who may be designing, evaluating, or implementing AI systems.

Some resources include: DAIR Institute, Responsible AI Institute, Algorithmic Justice League, webinars from Dr. Rumman Chowdhury, and Algorithm Watch.

3. Tools — Adopt tools to help your team evaluate risks and consequences. Review ethical AI canvas tools, and integrate an approach to scanning for unintended consequences when developing your products, services, and operations. Here’s an extensive roundup of frameworks, guidelines, and toolkits.

Different parts of the organization will likely need different types of tools. Think about how responsible AI practices will actually be incorporated into the way you build products now (e.g., How will you reconcile Agile development with AI safety?). Responsible AI development and use is a cross-functional effort, and it will require cross-functional leadership.

If you’re feeling overwhelmed, remember you don’t need to reinvent the wheel. Check out the frameworks and resources linked in this article, and refer to the vast amount of work that’s already occurred. Look for tips on how an ethical AI practice may evolve over time. There are ethical AI experts who have been working in this space for years — to learn more about the women who have been pioneering ethical AI development and use, see this profile.

As every aspect of our society adopts and integrates artificial intelligence, the technology has the potential to be incredibly disruptive in both positive and negative ways. For example, AI can substantially expand access to high-quality education and at the same time perpetuate unchecked bias in our healthcare systems. For companies, AI integration can propel marketing and product development forward, but it can also result in a variety of setbacks, including litigation due to privacy concerns.

With a critical lens applied to how, when, and why you develop and implement these tools, you can mitigate risks and orient the surge in AI innovations towards benefiting our collective future.
