Responsible AI at Pearson VUE
As technology becomes more powerful and accessible, high-stakes testing methods have evolved in parallel. For Pearson VUE, this includes the adoption of assistive capabilities that employ Artificial Intelligence (AI). We are sensitive to the valid concerns regarding some uses of AI, particularly with respect to privacy, security, and bias, and we are committed not only to fostering equality and fairness, but also to transparency and to maintaining the highest ethical standards in testing and proctoring practices.
Our promise to testing candidates and exam owners
We recognize that when using Pearson VUE systems and tools, candidates and test owners place their trust in us. That’s a responsibility we take very seriously, and it’s why we’re committed to designing, developing, and using AI with integrity. Every application of AI technology is designed responsibly to respect and protect data privacy while mitigating the risk of discrimination and fraud for both candidates and testing programs. Above all, we are committed to a human-controlled approach to the use of AI technology for test delivery purposes. The following values and principles guide our efforts to use AI responsibly, together with actionable commitments to implement AI technology wisely, ethically, and fairly.
Value | Guiding principle | Our commitment |
---|---|---|
Privacy and security | Diligently protect test-taker PII data. | We minimize personal data collection and process only what is required for its intended purpose, as described in the Pearson VUE Privacy Policy, retaining this data for no longer than required. Our third-party partners who provide AI services do not retain candidate data, and all third parties we collaborate with must meet stringent requirements and agree to data deletion policies before handling candidate data. We adhere to local and global data privacy and retention laws and build our systems to enable compliance with applicable regulations. System tests, audits, and penetration testing are standard, ongoing practices. |
Fairness and anti-bias | Minimize the potential for unfair bias and/or impermissible discriminatory decisions. | To protect the integrity of both test takers and certification/licensure programs, we use AI technology throughout the testing experience to assist human greeters and proctors in detecting and flagging potential irregularities that require human evaluation. |
Accountability | Take responsibility for candidate-impacting decisions and actions. | We use AI systems and processes that prohibit automated decisions that could jeopardize a candidate’s ability to take or complete a test. AI functionality is limited to exam session observation and, when appropriate, requesting further review by human proctors, who then take any necessary actions. For example, upon observing behaviors most associated with attempted fraud, such as off-screen eye movements or background noise, the AI technology engages a live (human) proctor so that these events receive contextual human review for handling and decision making. |
Transparency and governance | Apply AI best practices broadly, effectively, and consistently, with ethical and technical oversight. | At present, there are no universal ethical and technical standards for the use of AI. Until such standards are implemented, we contract independent, third-party reviews and audits across AI design, development, and operations to provide strong assurance regarding AI standards and best practices. A governance committee provides oversight of these engagements and is responsible for 1) monitoring operational performance data and 2) any decisions made in respect of these practices. We welcome the opportunity to work with partners and customers who share our commitment to managing AI-based services with the utmost integrity and whose ethics align with our own. |
Last updated 2023-05-22