Import Audit Data Into an Accessibility Platform

Key takeaway: Accessibility platforms accept audit data through file uploads, API connections, or manual entry. Preparing data before import keeps workflows clean.

Most accessibility platforms accept audit data through file uploads, API connections, or manual entry. The method depends on the platform’s feature set and the format your audit results arrive in. Preparing the data before import reduces cleanup and keeps your remediation workflow organized from the start.

Importing Audit Data: Key Points
Common Formats: CSV, XLSX, and JSON are the most widely supported import formats across platforms.
Field Mapping: Each issue needs a page URL, WCAG criterion, severity level, and description to be useful inside a platform.
Scan Data vs. Audit Data: Automated scan results (approximately 25% of issues) often import separately from manual audit findings.
Pre-Import Cleanup: Standardizing column names and removing duplicate entries before import prevents tracking confusion later.

What Format Should Audit Data Be In Before Import?

Audit reports typically arrive as spreadsheets or PDFs. Platforms that support imports generally accept CSV or XLSX files. If your audit provider delivers results in PDF format, you will need to convert the data into a structured spreadsheet before uploading.

Each row in the spreadsheet should represent a single issue. Columns should include the affected page URL, the related WCAG success criterion (such as 1.1.1, Non-text Content), a severity or priority rating, a description of the issue, and a recommended remediation step. Some platforms also accept a screenshot or reference image column.
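As a rough sketch, the one-issue-per-row layout can be built with Python's standard csv module. The column names here are illustrative, not a standard; your platform's import template dictates the actual required fields.

```python
import csv
import io

# Hypothetical column layout for an import-ready audit spreadsheet.
# Actual required fields and header names vary by platform.
FIELDNAMES = ["page_url", "wcag_criterion", "severity", "description", "remediation"]

rows = [
    {
        "page_url": "https://example.com/checkout",
        "wcag_criterion": "1.1.1",
        "severity": "High",
        "description": "Product image is missing alternative text",
        "remediation": "Add a descriptive alt attribute to the image",
    },
]

# Write to an in-memory buffer; swap in open("audit.csv", "w", newline="")
# to produce an actual upload file.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDNAMES)
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```

Each dictionary becomes one row, which keeps the "one row per issue" rule enforced by construction.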

How Field Mapping Works During Import

When you upload a file, the platform reads your column headers and asks you to map them to its internal fields. For example, your spreadsheet column labeled “Page” might need to map to the platform’s “URL” field, and “SC” might map to “WCAG Criterion.”

Consistent naming across your audit reports makes this step faster. If you work with the same audit provider over time, request that they use a consistent column structure so each import follows the same mapping.
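The mapping step amounts to renaming your headers to the platform's field names. A minimal sketch, assuming the example headers from above ("Page", "SC") and hypothetical platform field names:

```python
# Hypothetical mapping from audit-report headers to platform fields.
# Real platforms typically let you define this mapping in their import UI.
COLUMN_MAP = {
    "Page": "URL",
    "SC": "WCAG Criterion",
    "Priority": "Severity",
}

def map_headers(row: dict) -> dict:
    """Rename keys per COLUMN_MAP; pass unmapped keys through unchanged."""
    return {COLUMN_MAP.get(key, key): value for key, value in row.items()}

mapped = map_headers({"Page": "https://example.com/", "SC": "2.4.7"})
```

Keeping COLUMN_MAP in one place is what a consistent column structure buys you: one mapping, reused for every import from the same provider.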

Importing Scan Results Alongside Audit Findings

Automated scans and manual audits produce different types of data. Scans flag approximately 25% of issues and typically export in structured formats that platforms ingest directly. Audit findings cover the remaining issues that require human evaluation, and they often need more descriptive fields.

Many platforms keep scan results and audit results in the same issue tracker but tag them by source. This distinction matters for reporting. Knowing which issues came from a scan versus a manual evaluation helps teams prioritize remediation and understand where automated monitoring can track fixes over time.
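If your platform does not tag issues by source automatically, you can add the tag before import. A small sketch, with a hypothetical "source" field name:

```python
def tag_source(rows: list[dict], source: str) -> list[dict]:
    """Attach a 'source' field so scan and audit issues stay distinguishable."""
    return [{**row, "source": source} for row in rows]

# Merge both data sets into one import file, preserving provenance.
scan_issues = tag_source(
    [{"page_url": "https://example.com/", "wcag_criterion": "1.4.3"}], "scan"
)
audit_issues = tag_source(
    [{"page_url": "https://example.com/", "wcag_criterion": "2.4.3"}], "manual-audit"
)
combined = scan_issues + audit_issues
```

The source field then drives the reporting split described above: filter on "scan" to see what automated monitoring can re-verify, and on "manual-audit" for issues that need human re-testing.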

Steps to Prepare Data Before Importing

Before uploading, review the spreadsheet for duplicate entries. Audits sometimes document the same issue on multiple pages, and importing duplicates creates clutter in the tracking system.

Confirm that every row has a value in the required fields. Missing page URLs or blank severity ratings will either cause import errors or create incomplete records. Also standardize how WCAG criteria are referenced throughout the file: pick one format and use it consistently (for example, "1.1.1" rather than mixing "1.1.1" with "Non-text Content").
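The three checks (duplicates, required fields, consistent WCAG references) can be combined into one pre-import pass. A sketch under stated assumptions: the field names are the hypothetical ones used earlier, and the name-to-number lookup table is a small illustrative sample, not a complete WCAG list.

```python
REQUIRED = ("page_url", "wcag_criterion", "severity")

# Illustrative sample mapping from success-criterion names to numeric
# references; extend with the criteria your audits actually cite.
SC_NAMES = {
    "non-text content": "1.1.1",
    "contrast (minimum)": "1.4.3",
}

def clean(rows: list[dict]) -> tuple[list[dict], list[str]]:
    """Drop duplicates, normalize WCAG references, and report missing fields."""
    seen, cleaned, errors = set(), [], []
    for i, row in enumerate(rows, start=2):  # row 1 is the spreadsheet header
        missing = [f for f in REQUIRED if not row.get(f, "").strip()]
        if missing:
            errors.append(f"row {i}: missing {', '.join(missing)}")
            continue
        crit = row["wcag_criterion"].strip()
        crit = SC_NAMES.get(crit.lower(), crit)  # names become numbers
        key = (row["page_url"].strip(), crit)
        if key in seen:
            continue  # same issue on the same page: skip the duplicate
        seen.add(key)
        cleaned.append({**row, "wcag_criterion": crit})
    return cleaned, errors

rows = [
    {"page_url": "https://example.com/", "wcag_criterion": "Non-text Content", "severity": "High"},
    {"page_url": "https://example.com/", "wcag_criterion": "1.1.1", "severity": "High"},
    {"page_url": "https://example.com/a", "wcag_criterion": "1.4.3", "severity": ""},
]
cleaned, errors = clean(rows)
```

Here the first two rows collapse into one (same page, same criterion once normalized), and the third is reported for its blank severity rather than silently imported as an incomplete record.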

What Happens After the Import

Once data is inside the platform, each issue becomes a trackable item. Teams can assign issues to developers, set deadlines, and monitor progress through dashboards. The value of a clean import shows up here: well-structured data means accurate filtering, reporting, and prioritization from day one.

Platforms that support user impact and risk scoring can apply those ratings automatically if the imported data includes severity fields. Without severity data, teams may need to score each issue after import.

Getting audit data into a platform accurately sets the foundation for every remediation decision that follows.