This excerpt from the LinkedIn Learning course “Career Essentials in Generative AI by Microsoft and LinkedIn,” presented by Vilas Dhar, lays out a cornerstone framework for AI decision making and implementation. As we continue to advance the frontier of innovation, it’s important that we design tools that support the future we want to create – one that is equitable, sustainable, and thriving. To do this, we need to develop frameworks for ethical creation just as quickly as we develop the technology itself. In this article, we’ll explore a three-part framework for evaluating and advising organizations on the creation of ethically grounded AI tools.
Responsible Data Practices
The first pillar of the framework is responsible data practices. This is the starting point for all ethical AI tools, because any new technology is only as ethical as the data it’s trained on. For example, if the majority of our consumers to date have been of a particular race or gender, then training the AI on that data means we’ll continue to design products and services that serve only that population’s needs.
As you consider building or deploying any new tool, you should ask: What’s the source of the training data? What’s been done to reduce explicit and implicit bias in that dataset? How might the data we’re using perpetuate or amplify historic bias? And what opportunities are there to prevent biased decision making in the future?
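One way to start answering these questions is a simple representation audit before training. The sketch below is illustrative only: the `gender` field and the `audit_representation` helper are hypothetical names, and a real audit would examine many more attributes, ideally with a dedicated fairness toolkit.

```python
# A minimal sketch of a training-data representation audit, assuming
# records are dicts with a hypothetical "gender" field; real datasets
# and attribute names will differ.
from collections import Counter

def audit_representation(records, attribute):
    """Report how each value of a demographic attribute is represented."""
    counts = Counter(r.get(attribute, "unknown") for r in records)
    total = sum(counts.values())
    for value, count in counts.most_common():
        print(f"{attribute}={value}: {count} ({count / total:.1%})")

# Made-up example records: a skew like this suggests the resulting tool
# may underserve groups that are rare in the training data.
training_records = [
    {"gender": "female"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "male"},
]
audit_representation(training_records, "gender")
```

A lopsided report here doesn’t prove the model will be biased, but it flags exactly the kind of historic skew the questions above are meant to surface.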
Well-Defined Boundaries on Safe and Appropriate Use
The second pillar of the framework is creating well-defined boundaries for safe and appropriate use. Any new tool or application of AI should begin with a focused statement of intention about the organization’s goals and an identification of the population we’re trying to serve.
For example, a new generative AI tool that writes news articles could help tell the stories of a wider range of underrepresented voices, or it could perpetuate misinformation. When considering ethical use, you should ask: Who’s the target population for this tool? What are their main goals and incentives? And what’s the most responsible way to make sure we’re helping them achieve those goals?
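A statement of intention becomes much more enforceable when it’s encoded as a checked policy rather than a document no one reads. The sketch below is an assumption-laden illustration: the `UsePolicy` class and the use-case names are hypothetical, not part of any real AI library.

```python
# A minimal sketch of encoding an intention statement and use boundaries
# as a policy that is checked before the tool handles a request.
from dataclasses import dataclass, field

@dataclass
class UsePolicy:
    intention: str                       # the organization's stated goal
    allowed_uses: set = field(default_factory=set)

    def check(self, use_case: str) -> bool:
        """Refuse any request outside the declared boundaries."""
        return use_case in self.allowed_uses

policy = UsePolicy(
    intention="Amplify underrepresented voices in local news",
    allowed_uses={"draft_community_story", "summarize_public_meeting"},
)

for request in ("draft_community_story", "generate_political_attack_ad"):
    verdict = "allowed" if policy.check(request) else "blocked"
    print(f"{request}: {verdict}")
```

The design choice worth noting is the allow-list: anything not explicitly within the stated intention is blocked by default, which keeps the boundary decision with the organization rather than with each individual request.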
Robust Transparency
The third pillar of the framework is robust transparency. We need to consider how transparent the tool’s recommendations are, including how traceable its outcomes are, so that humans can audit the tool and hold it ethically accountable.
When it comes to transparency, you should ask: How did the tool arrive at its recommendation? Is it possible for decision makers to easily understand the inputs, analysis, outputs, and process of the tool? And have you engaged with a broad range of stakeholders to make sure that this tool promotes equity in the world?
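Traceability starts with recording what the tool saw and what it recommended. Here is a minimal sketch of an audit log; the record fields and the example inputs are illustrative assumptions, not a standard schema.

```python
# A minimal sketch of a traceability log: record the inputs, output, and
# timestamp of every recommendation so humans can audit it later.
import json
import time
import uuid

def log_recommendation(inputs: dict, output: str, path: str = "audit_log.jsonl"):
    record = {
        "id": str(uuid.uuid4()),   # stable ID for tracing one decision
        "timestamp": time.time(),
        "inputs": inputs,          # what the tool was given
        "output": output,          # what it recommended
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

trace_id = log_recommendation(
    {"applicant_score": 0.72, "region": "midwest"},
    "recommend_interview",
)
print(f"Logged recommendation under trace ID {trace_id}")
```

An append-only log like this doesn’t explain *why* the tool decided what it did, but it gives auditors the inputs, outputs, and a stable ID to start from, which is the precondition for answering the questions above.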
As you embark on building and using increasingly complex AI tools, this framework of responsible data practices, well-defined boundaries on safe and appropriate use, and robust transparency should give you a foundation for making smarter, more informed decisions.