Optional fields to capture product, developer, vendor, author, etc. #66
Moving Owen's lists into lists so they are easier to read. There is the URL of the project, but also the URL of the repository that contains the OPAT. For instance, with Drupal, it might be somewhere like https://git.drupalcode.org/project/opat-en.yaml. It would also be useful to outline how one would add optional fields if a department wanted to add additional metadata for their own internal use.
So if we want to include issue queues, we might want to consider switching from the current single-link field to one that allows an array of links. From this the system could draw out the title, or other data such as the last-updated date, if it were available.
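As a rough sketch of what an array-of-links structure might look like (the field names and the `last_updated` value are illustrative only, not part of the current schema):

```yaml
# Illustrative only: a "related_links" array replacing a single URL field.
related_links:
  - title: Drupal project page
    url: https://www.drupal.org/project/drupal
    last_updated: 2022-03-01   # optional; could be drawn from the target
  - title: Accessibility issue queue
    url: https://www.drupal.org/project/issues/drupal
```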
I'd also like to see a repository link up at the top, so the header metadata could be updated to include it. Might also be useful to have repository_markdown and repository_html fields as optional.
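Those optional repository fields could look something like this (the repository URL is the Drupal example from this thread; the Markdown/HTML URLs are hypothetical placeholders):

```yaml
# Illustrative only: optional repository fields alongside the report metadata.
repository: https://git.drupalcode.org/project/opat-en.yaml
repository_markdown: https://example.com/opat-en.md    # hypothetical rendered Markdown
repository_html: https://example.com/opat-en.html      # hypothetical rendered HTML
```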
The vendor and report author are listed here - #14 (comment). @saz33m suggested that there would be a need to allow for a developer, so someone could take a project like Drupal, customize it (say, with a custom install profile), and then create a VPAT for that customized tool. In that case they might want to start with Drupal's VPAT and then extend it for the customization. If an external firm is involved in authoring the ACR, then we'd need to allow for that too.
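A minimal sketch of separate vendor, developer, and author blocks, assuming this three-role split (all names are hypothetical and the field names are not an agreed schema):

```yaml
# Illustrative only: distinguishing the upstream vendor, the customizing
# developer, and the external firm that authored the ACR.
vendor:
  name: Drupal Association         # upstream project/vendor
developer:
  name: Example Agency             # hypothetical firm that customized the product
  notes: Custom install profile built on Drupal core
author:
  name: Example Accessibility Co.  # hypothetical external ACR author
```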
testing_process field

Talking to an accessibility manager at a bank, it would be useful to have optional notes fields that allow an author to verify how tests have been done. Automated processes are good but insufficient; a VPAT should not claim conformance if only automated testing has been done. We can ask the author to specify what tools and approaches they used to do the testing. This might be defined in the notes field, but if we ask for it explicitly, and specifically call for examples of manual testing in the editor, it might help drive home the point. A dedicated field also makes reports more comparable. It might even include an array of tools and versions used.

user_impact field

In a VPAT there is presently no way to measure user impact. Tools like axe identify issues as critical, serious, moderate & minor - this is one type of user impact. The other is whether the errors are common, or part of a key task that a user is expected to do. In order for suppliers to be able to evaluate how important an error is to their users, they need to understand its impact. EDIT: This should describe the severity and determine whether it is a show-stopper or whether there are workarounds.
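The two fields above might be sketched like this (field names, values, and tool versions are all illustrative assumptions, not an agreed schema):

```yaml
# Illustrative only: optional testing-process and user-impact fields.
testing_process:
  description: >
    Manual keyboard and screen reader testing of key user tasks,
    supplemented by automated scans.
  tools:                    # array of tools and versions used
    - name: axe-core
      version: 4.4.1
    - name: NVDA
      version: 2021.3
user_impact:
  severity: serious         # e.g. critical / serious / moderate / minor
  workaround: >
    Note whether this is a show-stopper or a workaround exists.
```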
There is also the hash field - #69 (comment).
I think we might want to consider how this fits with EARL, since this captures the testing_process description https://www.w3.org/TR/EARL10-Schema/#TestMode - I think ideally users are integrating EARL test results so we have more granular detail, but I can see an argument for capturing this for authors who can't or aren't using EARL. |
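One way to keep such a field comparable with EARL would be to reuse the EARL 1.0 Schema's TestMode values directly; a sketch under that assumption (the YAML field names are hypothetical, while the `earl:` values come from the EARL specification):

```yaml
# Illustrative only: reusing EARL 1.0 TestMode values (earl:automatic,
# earl:manual, earl:semiAuto) so results map onto EARL reports.
testing_process:
  mode: earl:semiAuto       # automated scan with manual verification
  description: axe-core scan followed by manual keyboard testing
```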
Would also be useful to be able to document whether there is any alignment with Section 504 requirements, and whether there is an ETA by which a vendor is working to implement a fix for the bug that is described.
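If those were captured as fields, they might look like this (both field names and the date are hypothetical placeholders):

```yaml
# Illustrative only: hypothetical fields for standards alignment and fix ETA.
related_standards:
  - Section 504
remediation_eta: 2022-12-31   # date by which the vendor plans to ship a fix
```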
Need to flesh out what the needs and data structure here are, but it could be good to capture:
For the