Inclusive AI for people with disabilities: Key considerations

Artificial Intelligence | 3 December 2024

Artificial Intelligence (AI) is revolutionising the way we interact with digital technology, with each other and, in some cases, with the world around us. AI-powered assistive software and tools can significantly enhance the quality of life of people with disabilities, including by empowering them to enter the workforce and operate more successfully in the workplace. From voice-activated assistants to writing support tools, AI-powered technologies are breaking down barriers and opening up new opportunities. However, AI can also create inaccessibility challenges and potential disadvantages for people with disabilities, raising questions around the equitable distribution of AI's benefits.

As we progress towards an AI-enabled future, both AI developers and deployers should consider how their AI projects can meet their equality and inclusivity obligations.

To mark International Day of Persons with Disabilities (IDPD) 2024, we look at how AI is empowering inclusivity and improving accessibility for disabled individuals, before providing an overview of the global legal landscape in the US (at federal and state levels), Europe (the EU and the UK) and APAC (Australia, Singapore, China and Hong Kong).

AI EMPOWERING INCLUSIVITY

AI is a powerful tool that can enhance access to opportunities and enable a greater level of independence for people with various disabilities. For example:

  • Dyslexic individuals may benefit from AI-driven applications that provide writing support.
  • For visually impaired individuals, AI-powered applications can be powerful tools, with some applications being able to describe the world around their users verbally, read text aloud, recognise individuals' faces and even identify currency notes.
  • AI can provide real-time translation in sign language or text during video calls, which may assist individuals with a hearing impairment.
  • Individuals with mobility impairments can control their smart home appliances through AI virtual assistants, including automating routine tasks such as controlling the lights, door locks, and thermostats.
  • AI can also empower people with disabilities by offering other tailored support and tools that can be leveraged according to needs and preferences, such as summarising calls, emails or documents, carrying out 'plain language' text conversion and assisting with authoring through suggested text. 

Such AI solutions can help to empower people with disabilities in the workplace and in customer interfaces, enhancing the inclusivity of those environments and their customer journeys.

INACCESSIBILITY CAUSED BY AI 

Despite the advantages that AI can offer, its use can also create accessibility challenges where the nuanced needs of people with disabilities are not properly considered, particularly at the design stage. For example, access to an opportunity may be offered only through an AI tool that is not designed to be accessible to people with disabilities. Alternatively, the design team may have considered the access needs of some disabled users but not others, resulting in discrimination against those whose needs have not been taken into account.

The ethical implications of such inaccessibility can be profound. It can raise barriers for people with disabilities, or create inequitable access to important tools or opportunities. It is imperative that the need to address inaccessibility issues remains at the forefront of the development, training and deployment of AI.

Why Would AI Create Inaccessibility Issues for People with Disabilities?[1]

The following are examples of how AI can create inaccessibility issues for people with disabilities:

1. Unrepresentative dataset

At the design and training stages, machine learning algorithms typically identify patterns and regularities in the datasets used to train them by building a mathematical model. To create an inclusive AI model, those datasets have to be representative: the members of a dataset should have characteristics representative of the class to which they belong. Otherwise, the tool will be unable to account for all segments of society.

Such unrepresentativeness could, for example, result from a lack of access to sufficient data, conscious or unconscious bias within the development team, or the structure of the organisation.

An AI model that is not developed in a manner that is mindful of the accessibility needs of different types of disability, or does not have appropriate data relating to such individuals in its dataset, may be at a higher risk of discriminating against them.

Even AI systems designed to improve accessibility for people with disabilities in a particular identified category – such as voice recognition or predictive text tools – might perform poorly for some disabled users if they are not trained on datasets that include, for example, diverse speech patterns, typing abilities and other disability-related characteristics.
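
To illustrate the point, the sketch below shows one simple form of representativeness audit a development team might run before training: comparing each group's share of the training data with its expected share of the user population. The field names, groups and expected shares are hypothetical; a real audit would rely on properly sourced population data and far richer disability-related categories, collected lawfully and with consent.

```python
from collections import Counter

# Hypothetical training records; "group" stands in for a
# disability-related attribute collected lawfully and with consent.
training_set = (
    [{"group": "none_reported"}] * 95
    + [{"group": "screen_reader_user"}] * 5
)

def representation_report(records, expected_shares):
    """Compare each group's share of the dataset with its expected
    share of the population the model is meant to serve."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total
        status = "UNDER-REPRESENTED" if observed < expected else "ok"
        print(f"{group}: {observed:.0%} of data vs {expected:.0%} expected ({status})")

# Expected shares are illustrative only; in practice they would come
# from census or user-research data, not guesswork.
representation_report(training_set, {
    "none_reported": 0.85,
    "screen_reader_user": 0.15,
})
```

Such a check surfaces only one failure mode; testing the trained model's performance separately for each group is just as important.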

2. Inadequate Design and Testing 

To minimise inaccessibility issues, it is important that every stage of the AI development process – especially the design and testing phases – is carried out with deliberate consideration of the barriers that different users might face, so that these can be mitigated.

AI applications may – unbeknown to their developers – lack adequate support for, or knowledge of, the needs of people with disabilities. A common manifestation of this is the prioritisation of visual content. Some AI tools feature complex graphical user interfaces and navigation structures that focus on visual aesthetics, or rely heavily on visual content without providing alternative text descriptions or audio narration. Such content and structures can be inaccessible to visually impaired users, especially where compatibility with assistive technologies, such as screen readers, has not been considered.
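
As a concrete example of the kind of automated check an accessibility-minded testing phase might include, the sketch below flags images that lack alternative text in a page's HTML. It is a minimal illustration using Python's standard library, not a substitute for full testing against standards such as the Web Content Accessibility Guidelines; the sample markup is invented.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the sources of <img> tags that lack a non-empty
    alt attribute - one simple check among the many that an
    accessibility test pass should include."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            alt = (attributes.get("alt") or "").strip()
            if not alt:
                self.missing.append(attributes.get("src", "<unknown>"))

# Invented sample markup: the chart has no alt text, the logo does.
page = '<img src="chart.png"><img src="logo.png" alt="Company logo">'
checker = MissingAltChecker()
checker.feed(page)
print("Images missing alt text:", checker.missing)  # ['chart.png']
```

Equivalent checks can be wired into a continuous integration pipeline so that accessibility regressions are caught before release.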

3. Inaccessibility of Employment Opportunities and Financial Services

In certain contexts, accessibility issues can have more significant consequences for people with disabilities. These include access to employment and finance.

In the employment field, AI-driven aptitude or personality tests used as part of the selection process may disadvantage disabled candidates compared with non-disabled candidates – for example, if the test platforms are not compatible with the screen readers used by visually impaired candidates.

Furthermore, AI tools are used in many aspects of the employment relationship cycle, including drafting job descriptions, filtering job applicants, drafting performance appraisals and selection for redundancy. As highlighted by regulatory guidance, such as the recent Guidance from the UK's Department for Science, Innovation and Technology for procuring and deploying AI responsibly in the HR and recruitment sector, there are numerous discrimination risks that can result from the use of AI tools in recruitment.

The implications of the use of AI in the employment field and its impact on people with disabilities will be covered in another briefing.

The accessibility of AI interfaces is also important in a financial services context. Examples of areas where people with disabilities could face challenges include:

  • Customer service - AI-powered customer service bots may not be programmed to understand the particular needs of disabled customers, leading to a lack of adequate support or assistance through those platforms.
  • Non-accessible content - AI-generated content, such as financial reports or statements, might not always adhere to accessibility standards, posing challenges for visually impaired users.
  • Biometric authentication - AI-powered biometric authentication systems, such as facial recognition or fingerprint scanning, may not be adaptable or accessible for people with certain physical disabilities, including facial disfigurement or amputation, thereby restricting them from accessing financial services with the same ease as others.

Banks and other financial institutions are advised to prioritise accessibility solutions so as to address such issues. We will be covering this topic in further detail in another briefing.

THE GLOBAL LEGAL LANDSCAPE 

A variety of laws will affect an organisation's legal obligations in relation to accessibility, even if those laws may not be framed as specifically applying to AI development or use. Their application may depend on the role of the organisation in the AI development and use lifecycle, the context in which the AI will be used, and whether the relevant disability is protected under the applicable law.

Below are some spotlight examples of legal considerations in a number of jurisdictions that may be relevant, depending on the factual matrix of the AI development or use.

The United States

In the United States, there is no federal law focused on the intersection of AI and accessibility, and most states have not holistically addressed this topic in legislation. However, there are a number of federal and state efforts that address accessibility and AI or speak to AI and disability in the context of prohibiting discrimination.

Key Federal developments

On 4 October 2022, the Biden-Harris Administration's Office of Science and Technology Policy released a Blueprint for an AI Bill of Rights, which includes a principle on algorithmic discrimination protections. This principle asserts that individuals should not face discrimination by algorithms and that systems should be designed and used equitably.

In response, the Partnership on Employment & Accessible Technology, led by the U.S. Department of Labor's Office of Disability Employment Policy, produced the AI and Disability Toolkit, which provides guidance on implementing equitable AI in the workplace.

On 30 October 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, calling on government agencies to create guardrails and legislation for safe and secure AI. In response, the Office of Federal Contract Compliance Programs (OFCCP) developed a guide addressing AI in the Equal Employment Opportunity context. The guide discusses obligations enforced by OFCCP, but only applies to federal contractors and subcontractors, not private employers.

Other notable developments include the release by the Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ) of anti-discrimination technical assistance and guidance on the Americans with Disabilities Act and employment algorithms. The EEOC also published an American Sign Language video, tips for employees and applicants, and information about assistive technology on its Job Accommodation Network. The EEOC's AI focus dates back to a 2021 agency-wide initiative centered on the use of software, including AI, in employment decisions.

Notably, each of the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), the DOJ and the EEOC has urged that AI systems be developed and used in a manner that avoids discriminatory impact. On 24 April 2023, these agencies released a joint statement affirming that existing law applies to the use of automated systems and technologies, including AI, and reiterating their resolve to promote responsible innovation and protect individuals' rights.

Similarly, in 2024, on the 34th anniversary of the Americans with Disabilities Act, representatives of the EEOC and the DOJ issued an announcement affirming these agencies' collaboration to prioritise technological equity, inclusion and accessibility through a multi-pronged approach.

Following the 2024 Presidential election, the incoming Trump Administration is expected to usher in broad changes in both policy and law, with potentially significant implications across sectors, including AI. It will be important to continue monitoring developments in this area. The following Clifford Chance publication may be of interest in this context: AI Pulse Check: Will the Biden Executive Order on AI Survive the Trump-Vance Administration?

It is also worth noting that the following federal bills are currently pending:

  • Eliminating Bias in Algorithmic Systems Act of 2023 – it would require agencies that use, fund or oversee algorithms to have an office of civil rights focused on bias, discrimination and other harms that algorithms may create.
  • Algorithmic Accountability Act of 2023 – it would require companies to assess the impacts of automating critical decision-making, and would require the FTC to create regulations for assessments and reporting and to establish an information repository.
  • AI Foundation Model Transparency Act – it would direct the FTC to set transparency standards for AI foundation model deployers by requiring them to make certain information publicly available to consumers.

In addition, the Federal Communications Commission (FCC) recently released a notice of proposed rulemaking that would impose new disclosure and consent requirements for auto-dialled marketing calls and texts that use AI to generate content. As part of the FCC's focus on digital equity, the proposed rule would include an exemption for individuals with speech or hearing disabilities.

Key State developments

A number of states have focused on mitigating the risk of algorithmic discrimination. Examples include:

  • Colorado's recent Artificial Intelligence Act – it requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination, which includes the use of AI in a manner that results in unlawful differential treatment or impact based on disability. A "high-risk AI system" is, subject to exceptions, any AI system "that, when deployed, makes, or is a substantial factor in making, a consequential decision." This law is regarded as the first comprehensive state-level AI legislation in the U.S.
  • Colorado's SB 21-169, Protecting Consumers from Unfair Discrimination in Insurance Practices – it governs the use of external consumer data and information sources (ECDIS), as well as algorithms and predictive models that use ECDIS, in insurance practices that unfairly discriminate based on personal characteristics, including disability.
  • New York City's Local Law 144 – it mandates bias audits of AI-enabled tools used to make employment decisions. It also requires employers to disclose to each New York City resident who applies for a position if their application is subject to an automated tool, and to allow the candidate to request an alternative evaluation process.

Other states have measures pending, including:

  • California's AB 2930 – it would prohibit employers from using automated decision systems in a discriminatory manner and require annual impact assessments and notice to individuals prior to making consequential decisions using such systems.
  • Illinois' HB 5322 – it would require deployers of automated decision tools to conduct annual impact assessments covering uses and reasonably foreseeable risks of algorithmic discrimination.

Another important source of guidance is emerging from case law, including the decision in Mobley v. Workday, Inc., one of the first class-action lawsuits in the U.S. alleging discrimination through algorithmic bias in applicant screening tools. Workday asked the court to dismiss the suit, but the class action will proceed, with the court finding that Workday can be deemed an "agent" of the employer, sharing responsibility for hiring decisions. If the allegations are proven, the case will have important consequences for employers and their AI vendors.

EUROPE

The European Union

In accordance with the EU's Strategy for the Rights of Persons with Disabilities 2021-2030, accessibility – including accessibility to information and communication technologies – is considered an enabler of rights, autonomy and equality.

Existing and developing rules around accessibility

A number of rules and requirements applying in the EU already focus on accessibility, including in relation to technologies and associated products and services. Some of these rules and requirements arise from international agreements, such as the United Nations Convention on the Rights of Persons with Disabilities. Others are specific to the EU, including the 2016 Directive on the accessibility of the websites and mobile applications of public sector bodies (EU Web Accessibility Directive) and the 2019 Directive on the accessibility requirements for products and services (EU Accessibility Act) (together, the EU Accessibility Directives). Notably, the EU Accessibility Act sets out accessibility requirements for key consumer products (such as computer hardware systems and operating systems, self-service terminals such as ATMs, and smartphones) and key consumer services (electronic communications services, services providing access to audiovisual media services, banking services, e-books and dedicated software, e-commerce services, etc.), in each case placed on the market or provided after 28 June 2025, subject to specific transitional measures. The EU Accessibility Directives also provide for the elaboration of harmonised standards, compliance with which creates a presumption of conformity. Standardisation efforts in relation to the EU Accessibility Act are notably under way.

There are also some national laws and strategies on or affecting accessibility in the EU Member States.

There has been a growing focus in the EU on the accessibility of AI more specifically, including in strategic orientations and guidelines. These include the 2019 Ethics Guidelines for Trustworthy AI, developed by the independent High-Level Expert Group on AI set up by the European Commission (2019 Ethics Guidelines). 'Diversity, non-discrimination and fairness' is one of the seven key principles in the 2019 Ethics Guidelines, translating into guidance on accessibility and universal design, as well as stakeholder participation. Particular importance is also placed on accessibility to AI for disabled individuals.

The EU AI Act

Building on the above, the EU Artificial Intelligence Act (EU AI Act), which entered into force on 1 August 2024 and will start applying gradually, looks to reinforce accessibility in relation to AI. While some consider that the EU AI Act does not go far enough, it is already an important step in the right direction. Below is a summary of some important issues to consider:

High-risk AI - the EU AI Act places the focus on accessibility in relation to 'high-risk AI systems', including as regards the information to be provided by providers of such systems to deployers as part of the instructions for use. Further, it requires providers to ensure that their high-risk AI systems comply with accessibility requirements in accordance with the EU Accessibility Directives. Following an 'accessibility by design' approach, and as much as possible, the necessary measures should be integrated into the very design of the system.

Further, the EU AI Act requires deployers of certain types of high-risk AI systems to carry out a fundamental rights impact assessment (FRIA) before deploying such systems. The FRIA includes, amongst other things, a description of the specific categories of natural persons and groups likely to be affected in the specific context of use of the system, and the specific risks of harm likely to have an impact on those categories of persons or groups. On this, the European Disability Forum (EDF), an independent NGO that brings together representative organisations of persons with disabilities from across Europe, considers in its recent toolkit on AI and the EU AI Act that deployers (the EDF toolkit refers to developers; we assume this means deployers here) must consult with representatives of 'marginalised groups', such as persons with disabilities, in order to understand how the system could affect them. "For example, if a high-risk AI system is being introduced in a public sector setting, the [deployer] should seek feedback from disability organisations to ensure the system is fair and accessible".

AI subject to specific transparency - another area in which the EU AI Act emphasises the need for compliance with accessibility requirements relates to the transparency to be provided with respect to certain specific AI systems, uses or outputs. For example, and subject to caveats:

  • AI systems intended to interact directly with natural persons, such as chatbots, will need to be designed in such a way that the persons concerned are informed that they are interacting with an AI system (unless this is obvious).
  • The outputs of a generative AI system will need to be marked in a machine-readable format and detectable as being artificially generated or manipulated.
  • Natural persons exposed to an emotion recognition system or a biometric categorisation system will need to be informed of its operation.
  • Deployers of AI systems generating or manipulating content that constitutes deepfakes will have to disclose that the content has been artificially generated or manipulated. The same will apply to deployers of AI systems generating or manipulating text published to inform the public on matters of public interest.

Under the EU AI Act, in each case, the information to be provided will need to conform to the applicable accessibility requirements.

R&D for socially beneficial AI - more generally, the EU AI Act also seeks, albeit softly, to promote research and development into socially beneficial AI, notably as regards increasing accessibility for persons with disabilities.

Voluntary initiatives - the EU AI Act encourages relevant AI operators to take certain steps on a voluntary basis. This includes (i) creating codes of conduct to extend, to non-high risk AI systems, requirements made mandatory by the EU AI Act for high-risk AI systems, and (ii) applying additional 'requirements', for instance related to the 2019 Ethics Guidelines as well as AI literacy, inclusive and diverse AI design and development with particular attention paid to vulnerable persons and accessibility for persons with disabilities, and stakeholder participation.

The AI Office (a governance body established within the European Commission and that plays a key role in the implementation of the EU AI Act, as well as in its enforcement as regards providers of general-purpose AI models) and the EU Member States are required to facilitate the drawing up of codes of conduct for the voluntary application of specific requirements to all systems, with the 2019 Ethics Guidelines serving as a basis for their preparation. Key objectives expressly called out with respect to these codes of conduct include addressing the negative impact of AI on vulnerable persons, notably, once again, in relation to accessibility for persons with a disability.

While the EU AI Act does not necessarily create a legally binding and detailed set of accessibility requirements for all AI systems across the board, the issue of accessibility is now part of the EU's AI rulebook, including the rules for high-risk AI systems. Beyond that, and furthering the EU's social agenda and strategy, accessibility should be an important element of the drive for the uptake of human-centric and trustworthy AI – a key objective at the heart of the EU AI Act.

Council of Europe Framework Convention on AI

In terms of recent developments, and looking beyond the EU, the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law is also worth noting. This Framework Convention on AI, which was opened for signature on 5 September 2024 and has already been signed by various parties including the EU, the US and the UK, does not contain provisions focusing on accessibility for disabled individuals as such. It does, however, require each party, in accordance with its domestic law and applicable international obligations, to take due account of any specific needs and vulnerabilities in relation to respect for the rights of persons with disabilities. This is also something to be monitored.

The UK

In the UK, the legislative and regulatory landscape covering inaccessibility arising from the use of AI is found in legal frameworks of wider application, in particular accessibility and equality laws. Key pieces of legislation that indirectly govern such inaccessibility issues include the Equality Act 2010 and the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018 (PSB Regs). These laws impose a wide range of legal requirements on website operators, mobile apps and other digital communications, which increasingly make use of AI.

The Equality Act

The Equality Act is a comprehensive anti-discrimination law, providing protection from discrimination at work, in education and in wider society. It applies to private and public sector organisations (with some exemptions for public sector bodies).

The Act makes it unlawful to discriminate against certain groups, including people with disabilities. As well as having a range of applications in an employment context, the Act covers both physical and digital access to services, including websites, apps and documents in various formats, such as PDFs. Under the Act, organisations need to make reasonable adjustments for people with disabilities, which may include features such as captions and subtitles. This anticipatory duty to make reasonable adjustments is subject to additional provisions for persons providing services to the public (or a section of it). These obligations encompass websites incorporating AI-powered functionality.

The PSB Regulations

The PSB Regs specifically target digital accessibility in the public sector. They were passed to implement the EU Web Accessibility Directive and remain in force in the UK post-Brexit. They require public sector bodies to ensure that their websites and mobile applications meet similar requirements to those set out in the Web Content Accessibility Guidelines produced by the World Wide Web Consortium. Websites and mobile apps are considered accessible, and therefore compliant with the PSB Regulations, if they are "perceivable, operable, understandable and robust".

While the PSB Regs do not explicitly mention AI, they impose accessibility requirements on AI technologies as they are increasingly integrated into websites and mobile applications published by public sector bodies.

The UK government has recently announced that it intends to legislate to regulate developers of the most powerful AI models. A consultation exercise on draft legislation is expected imminently. It remains to be seen what implications any new legislation will have for accessibility issues.

APAC

In the APAC Region, there is no specific legislation targeting the accessibility of AI for people with disabilities, but there are ongoing efforts to address discrimination and enhance AI accessibility in the online space. These initiatives aim to create a more inclusive digital environment for all users.

Australia 

Australia has been proactive in regulating the online safety and security of its citizens, especially given their increased reliance on digital platforms. While not directly regulating AI, the Online Safety Act 2021 (OSA) aims to create safer online environments that could include accessibility considerations. It contains a set of Basic Online Safety Expectations (BOSE), which outline the minimum standards and obligations that online service providers must meet to ensure the safety of their users. These include providing clear and accessible reporting mechanisms, taking reasonable steps to prevent the dissemination of harmful content, and complying with requests from the eSafety Commissioner.

The BOSE requires online service providers to consider the needs and circumstances of vulnerable users, including people with disabilities, and to ensure that their services are accessible and inclusive.

The OSA also empowers the eSafety Commissioner to issue notices to online service providers to remove or restrict access to harmful content, which could include content that is inaccessible or discriminatory to people with disabilities.

In parallel, the National Disability Insurance Scheme (NDIS) in Australia has developed a framework for AI-enabled assistive technology. This framework aims to promote innovation and ensure the development of safe and effective AI-enabled assistive technologies. It focuses on six key principles: user experience, value, quality, safety, privacy and security, and human rights. The framework is intended to guide market development and support the matching of technologies to individual needs.

On a broader scale, Australia has also established AI ethics policies to guide the responsible development and implementation of AI. The Department of Industry, Science and Resources has published Australia's AI Ethics Principles, which are designed to ensure that AI is safe, secure and reliable. The principles of fairness, transparency, contestability and accountability specifically address issues such as biased algorithms, lack of transparency and insufficient testing for accessibility. They promote equitable AI, provide clarity on AI operations, allow challenges to AI outcomes, and establish clear responsibility for AI systems' impacts, helping to ensure that AI serves the needs of the entire community, including people with disabilities.

Singapore

The Model AI Governance Framework, last updated in 2020, is a voluntary, non-binding set of principles and best practices for organisations to adopt when developing and deploying AI solutions. The Framework is based on two overarching principles: that AI should be human-centric, and that it should be explainable, transparent and fair.

While the Framework explicitly clarifies that it does not address legal liabilities linked to AI (e.g. unequal access to AI products and services by different segments of society), it does acknowledge the broader implications these initiatives hold for the inclusion and empowerment of individuals with disabilities within the digital economy and society at large.

Interestingly, in January 2024, the Singapore Parliament noted in a motion entitled "Building an Inclusive and Safe Digital Society" that digital inclusion must embrace diversity which includes the need to ensure digital interfaces are inclusive by design.

China

A notable initiative in China is the joint statement made by China on behalf of 70 countries at the UN Human Rights Council, emphasising the need for AI to assist people with disabilities. The statement proposed to increase collaboration to reduce the digital divide, promote the high-quality development of AI (with the interests and benefits of people with disabilities being considered), and improve the convenience, accessibility and inclusiveness of AI.

On the policy front, policymakers have also indicated the necessity of aligning algorithms with ethical standards:

  • The Provisional Administrative Measures for Generative AI Services prohibit the use of AI to generate any discriminatory content or decision based on ethnicity, beliefs, nationality, region, gender, age, occupation and health.
  • The Measures for Ethical Review of Science and Technology (Trial) require AI enterprises to establish Science and Technology Ethics Committees and conduct ethical reviews. Among other things, scientific and technological activities that utilise personal data or may otherwise pose ethical risks are subject to ethical review. During an ethical review, the design, implementation and application of algorithms are reviewed for fairness, transparency, reliability and controllability.

Hong Kong

The Hong Kong Model Data Protection Framework for AI issued by the Hong Kong Privacy Commissioner for Personal Data in June 2024 focuses on the ethical and responsible use of AI. This set of guidelines advocates for explainable AI, robust data privacy, and human-centric design, aiming to make AI technologies user-friendly and inclusive. It promotes the development of a responsible AI ecosystem that is accessible and trustworthy by addressing biases and ensuring continuous monitoring.

KEY CONSIDERATIONS FOR BUSINESSES

For many organisations, enabling accessibility and fair treatment for all users, employees and customers goes beyond any applicable legal requirements, being considered a moral issue as well as a reputational one. AI developers have a crucial role to play in ensuring that AI models and systems are accessible and inclusive. Organisations deploying AI will also need to consider how they can avoid discrimination and promote accessibility, often using supplier due diligence processes as a key tool in this area.

The following are some key points for developers and deployers of AI to consider:

  • What are your responsibilities regarding inclusivity? What information and/or assurances do you have regarding the inclusivity of the AI applications that you are developing or deploying that would help you to meet those responsibilities?
  • How would you ensure that a representative dataset is being used to train the AI, so as to avoid future inaccessibility issues?
  • What has been done to anticipate a range of needs and abilities in the prospective AI users? What adaptations could be made to prevent the creation of barriers that could exclude people with disabilities from using AI?  
  • What steps have been taken to ensure a diversity of thought and abilities in the teams designing and testing AI applications?
  • What training might be appropriate to promote understanding of the needs of people with disabilities, particularly for those involved in designing, procuring and testing AI applications?
  • How will periodic or ongoing assessment be managed in the AI development and improvement lifecycle?
  • What laws apply to the development of the concerned AI systems, and do these laws require any further actions or impose restrictions?
  • Will there be any mechanisms for capturing feedback from users, including those with disabilities?

NOTES

[1] Unrepresentative datasets and/or inappropriate design and testing can also give rise to other forms of unfair bias or discrimination in AI-assisted decision-making, beyond the accessibility issues for disabled individuals that are discussed in this article. Given the focus of this article on AI accessibility for individuals with disabilities, this article does not address other aspects of legal frameworks that relate to unfair bias and discrimination in AI development and use more generally.