Testing methodologies
A snapshot review of web accessibility testing approaches in Europe (update 1 June 2010)

12.04.2010

What is the current state of web accessibility testing approaches applied across Europe? We sent seventeen questions to a number of organisations running accessibility tests. Here are their replies.

Author: Detlev Fischer, Manager BIK Test Development

Introduction

This short comparative review has now received answers covering nine European accessibility test procedures.

The answers from Drempelvrij were added on 3 May 2010 and the answers of Sensus were added on 1 June 2010.

The German BIK project which maintains the BITV-Test decided to conduct this review to get a better picture of the state of other approaches applied in Europe, particularly at a point where we are facing a major revision of our own German test procedure to bring it in line with the expected revision of the German Federal Directive, BITV (a revision which will closely mirror WCAG 2.0).

This review simply aims to provide a comparative snapshot by listing, in alphabetical order, the replies of each organisation to the seventeen questions we came up with. Of course, we have also included answers regarding our own test procedure. So far, the survey includes nine organisations.

It should be noted that the answers given by RNIB describe the methodology used in a survey of the level of conformance of UK local authorities' websites to WCAG 1.0, part of the annual 'Better Connected' report published by the Society of Information Technology Managers (SOCITM). The usual approach followed is more pragmatic and does not involve spot-checks of sites against checklists.

If you are based in Europe, conduct accessibility tests and want to be included in this review, just send us your answers to the seventeen questions below and we will add you.

Overview of the seventeen questions and the replies

First, we wanted to know how the various test procedures relate to WCAG and UWEM, and how they have responded (or will respond) to the release of WCAG 2.0. Further questions then home in on some methodological choices. Is the procedure documented, and if so, is this information public? Is there a pre-audit of sites tested, and how is the test sample chosen? Are test steps detailed on the level of tools? What rating scheme is used? What skill level is required for testers? Does the procedure include an element of quality assurance? Are test reports publicly accessible? Is a quality mark or seal issued? And is this seal issued for a limited period only?

(1) Is there a reference or a clear mapping to WCAG checkpoints or the UWEM methodology?

Access-for-all

Our test protocol follows WCAG 1.0 exactly (the AA and 10 AAA checkpoints according to P028) and, since September 2009, WCAG 2.0 (A, AA, and optionally AAA), based on German translations.

AccessiWeb

The mapping to WCAG 1.0 checkpoints can be found at: http://www.accessiweb.org/fr/guide_accessiweb/edit_table_awv11_fr_wcag10_en.html

The mapping to UWEM is available at: http://www.accessiweb.org/fr/Label_Accessibilite/accessiweb_uwem1/

Anysurfer

Yes.

BIK

The BITV-Test checkpoints map to the German federal directive BITV 1, which in turn closely follows WCAG 1.0. The levels applied in testing are A and AA (which equal Priority 1 in the BITV), but some triple-A checkpoints have been included. However, since BITV is fairly general, the checkpoints often operationalise its requirements to make them testable.

Drempelvrij

Yes.

In the Netherlands, accessibility inspections are carried out under the auspices of the Quality Mark Drempelvrij.nl Foundation. Inspection is based on a publicly available normative document [1] that contains all level A and level AA checkpoints from WCAG 1.0, and some level AAA checkpoints. The Unified Web Evaluation Methodology (UWEM) was used while writing the normative document.

The normative document is based on the Web Guidelines [2], a quality model for websites that is developed for the Netherlands government. WCAG 1.0 is integrated in the Web Guidelines. Application of the Web Guidelines is mandatory for new websites of central government organisations since September 2006 [3] and for new websites of municipalities, water boards and provinces since December 2008, as part of the National Implementation Programme (NUP) [4].

  1. Normative document Web Guidelines - Success criteria (PDF) and Normative document Web Guidelines - caesura and guidance on the sampling (PDF)
  2. http://www.webguidelines.nl
  3. Ministerial decision on the quality of government websites
  4. National Implementation Programme
Nomensa

Our accessibility audits use WCAG as a basis. As the globally recognised benchmark, from which many other standards are derived, we believe WCAG to be the most robust and multi-disability option.

RNIB ('Better Connected' report)

We use the full WCAG A and AA checkpoint list, and for each checkpoint we spot-check across the site. That means that we use all the checkpoints, but not always the same pages. If we come across a page that can serve as an example of failure for more than one checkpoint, we use it more than once, but still try to find more instances. In some cases, before we fail a checkpoint, we try to find several issues: for example, we only fail for missing ALT attributes if we come across more than 5 informative or functional images missing the ALT attribute in main areas of the site, or fail for validation only if we find at least 5 pages with more than 50 HTML errors.

Sensus

Yes. Our evaluations are carried out according to the WCAG 2.0 success criteria, level AA and in some cases level AAA.

Technosite

Yes, in Spanish [1], but it is mostly numbers, so it is easy to follow.

(2) What is the status of the approach regarding WCAG 2.0? Is the evaluation procedure being overhauled to reflect changing requirements?

Access-for-all

Yes, since June 2009 we have been working to operationalize WCAG 2.0, i.e. to express the success criteria as test steps setting out what we demand and how we test. Since September 2009 we have been testing according to WCAG 2.0 and we continuously refine the procedure regarding recommended techniques.

AccessiWeb

We are currently working on version 2.0 of AccessiWeb in order to comply with WCAG 2.0. We expect to have this work finished in December [2009].

Anysurfer

Yes, by the end of 2009 we hope to have updated our guidelines, test procedure and labelling process.

BIK

The BITV-Test is currently being revised to be in line with the German federal directive BITV 2 (expected to be released in summer 2010), which closely follows WCAG 2.0. The revision process is therefore guided by WCAG 2.0.

This is not a straightforward matter since some WCAG 2.0 success criteria cover a lot of ground: 1.3.1 'Info and Relationships', for example, maps onto 7 of our established checkpoints. So there will be one-to-one, one-to-many and a few many-to-one mappings between WCAG 2.0 and the new BITV-Test.

Drempelvrij

A new version of the Dutch Web Guidelines is currently under development. WCAG 2.0 will be included in the new version 'as is': without any textual changes and all three conformance levels. This was requested in a motion by the Dutch parliament:

The Chamber […] requests the Dutch government […] [to bring] its accessibility guidelines into conformance with current and future guidelines formulated by the international World Wide Web Consortium (W3C)

Web guidelines from the current version that are not covered by WCAG 2.0 will be reformulated according to the WCAG 2.0 methodology.

Nomensa

We began preparing a new range of WCAG 2.0 based services before the new guidelines were released. We launched them earlier this year, and now offer audits using WCAG 1.0, 2.0 and Section 508 amongst others.

RNIB ('Better Connected' report)

We agreed with SOCITM to use WCAG 1.0 this year, with exactly the same methodology as in the past. This is mainly for two reasons: A) UK government guidelines still refer to WCAG 1.0 as the benchmark for the public sector; B) we didn't feel it was fair to change to WCAG 2.0 without giving website owners enough time to prepare. Most likely next year we'll use WCAG 2.0.

Sensus

Yes. We have been testing for conformance with WCAG 2.0 since January 2009.

Technosite

Yes, we are continuing to develop our own proprietary methodology for WCAG 2.0, but work on harmonizing under the auspices of UWEM is just beginning.

(3) Is a pre-audit carried out to filter sites before they are evaluated?

Access-for-all

After a short review which establishes the testing level (small, medium, or large) we recommend, for sites with a significant need for overhaul, a prior expert review, which is carried out according to our checklist Accessibility 1.0.

AccessiWeb

Yes. We ask people to check 8 criteria for which all tests can be done with a tool. They have to test the homepage of the site and answer yes or no to each criterion. Explanations (in French) about this pre-evaluation can be found at:
http://www.accessiweb.org/fr/Label_Accessibilite/pre_audit/

Anysurfer

Not at the moment. We only have a very short list of basic requirements, such as "the site is not fully Flash".

BIK

Sites are inspected to ensure they meet basic accessibility requirements and are therefore worth testing. A version of the BITV-Test is offered as a design-support test (with one tester) to prepare sites for the full BITV-Test (two independent testers).

Drempelvrij

Not formally. However, an automated instrument is available that checks 47 of 125 web guidelines. This instrument is publicly available and is also used to carry out informal pre-audits by website makers and website owners.

Nomensa

Yes, this forms part of our sales process. We complete a brief assessment of a website before we provide a quote, so we can offer an appropriate and tailored service.

RNIB ('Better Connected' report)

The sites are selected by SOCITM and usually comprise almost all local authority websites (around 430 this year). The initial screening is done using an automated tool. Only sites that meet the initial requirements are taken forward to manual testing for at least Single-A. A couple of examples of criteria: the number of missing ALT attributes exceeds 5% of all images found, or more than one FRAMESET element is found to lack a NOFRAMES element.
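As an illustration only, the two screening criteria mentioned above could be expressed in code roughly as follows. This is a hypothetical sketch, not SOCITM's or RNIB's actual tool, and all function and class names are invented:

```python
# Hypothetical sketch of the two automated screening criteria described
# above; this is NOT the actual tool used for the 'Better Connected' report.
from html.parser import HTMLParser


class ScreeningParser(HTMLParser):
    """Counts images without ALT attributes and FRAMESET/NOFRAMES elements."""

    def __init__(self):
        super().__init__()
        self.images = 0
        self.images_missing_alt = 0
        self.framesets = 0
        self.noframes = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images += 1
            if "alt" not in dict(attrs):
                self.images_missing_alt += 1
        elif tag == "frameset":
            self.framesets += 1
        elif tag == "noframes":
            self.noframes += 1


def passes_initial_screening(html: str) -> bool:
    """Return True if the page clears both example screening criteria."""
    p = ScreeningParser()
    p.feed(html)
    # Criterion 1: missing ALT attributes exceed 5% of all images found.
    if p.images and p.images_missing_alt / p.images > 0.05:
        return False
    # Criterion 2 (approximated): more than one FRAMESET lacks a NOFRAMES
    # fallback. Here we simply compare overall counts.
    if p.framesets - p.noframes > 1:
        return False
    return True
```

In practice such a check would run over a crawled sample of pages per site, with only the sites that pass going forward to manual Single-A testing.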

Sensus

It depends on the service provided. We rarely carry out pre-audits, but we often provide services throughout the development process, helping clients to focus on accessibility in the planning phase. We usually test in at least three phases:

  1. design (mock-ups/wireframes);
  2. when the basic structure is implemented; and
  3. finally, when some or all editorial content has been added.
Technosite

Yes, we produce a very short report for internal use to inform our approach to the client and our recommended course of action, to help us decide whether to recommend them to proceed or not.

(4) Is there a documentation of test steps?

Access-for-all

Up to now, this existed internally for WCAG 1.0; for WCAG 2.0, a public documentation is planned (we like BIK's approach in this respect). We use test procedures for blind screen reader users, visually impaired users and users with motor impairments, and then author a test report including a failure analysis. Test steps are continuously revised in line with developments in the relevant technologies. This is one reason we hesitate to publish the evaluation criteria, but we plan to go ahead nevertheless.

AccessiWeb

The AccessiWeb detailed list (liste déployée) gives tests to verify each criterion:
http://www.accessiweb.org/fr/guide_accessiweb/edit_table_awv11_fr_awv11_deploye_fr.html

Anysurfer

No. Some guideline explanations contain suggested ways of testing, but most of them don't.

BIK

All test steps are publicly documented in detail (in German) on the BITV-Test web site, including assessment guidance, examples of past assessments, and a section with frequently asked questions.

Drempelvrij

At present, there are two organisations that are allowed to carry out formal inspections. Both organisations have their own documentation, based on the normative document, in which the inspection procedure is described. This documentation is not publicly available. The documentation and steps have been calibrated by the Dutch Accreditation Council.

Nomensa

Yes, we have extensive internal documentation based on the publicly available WCAG 2.0 documentation.

RNIB ('Better Connected' report)

The See it Right guidelines are public. The document includes some general guidance for each checkpoint. Additional advice is available on our website and to clients we provide more in-depth support.

Sensus

Yes, we always present clients with an overview of the methodology of the tests. This overview is also given on our website. In some projects we give a detailed description of test steps, if requested.

Technosite

Yes, but it is not publicly available.

(5) Is there documentation on the required qualification of testers?

Access-for-all

No, we are too small for that, but we extensively train testers, and we are already offering for the second time a training position for a blind person who is set to become an IT expert, accessibility specialist and ECDL-BF examiner (European Computer Driving Licence).

AccessiWeb

We train future experts to review the accessibility of a web site according to the AccessiWeb methodology. At the end of the training they have to pass an examination. We issue a certificate confirming that they are able to evaluate a web site according to our methodology. The list of our experts can be found at:
www.accessiweb.org/fr/groupe_travail_accessibilite_du_web/experts/

Anysurfer

No, currently testing is always performed under auspices of one of our accessibility experts.

BIK

We document testers’ assessment decisions, which can be used as an indication of the competence achieved, but this has not yet been fully formalized and applied. The comparative benchmarks are generated based on the deviation of testers’ individual ranking decisions from the harmonized and quality-controlled final ranking.

Drempelvrij

Inspections are carried out under accreditation, according to ISO/IEC-17020 (General criteria for the operation of various types of bodies performing inspection). Accreditation is carried out by the Dutch Accreditation Council (RvA). The RvA examines the qualifications of the organisation, not of individual inspectors. As part of the accreditation process, inspectors are observed while at work.

Nomensa

Yes, our testers are all experienced members of our web development team. The requisite qualifications and requirements form part of their job description.

RNIB ('Better Connected' report)

Not really. All testers are RNIB Web Accessibility Consultants, with a strong background in this field.

Sensus

Yes, for larger projects there is.

Technosite

No. All testers are Technosite staff members and are selected on the basis of our experience.

(6) Is there an element of verification / quality control of tests carried out, especially manual tests? If so, how is it documented?

Access-for-all

Each test report has to pass a review. One member of staff authors it, another conducts the review, and vice versa. We apply a four-eyes principle among the three leading members of staff. We would like to organise this more efficiently, but have not found a good solution so far; the test cases are just too different.

AccessiWeb

Our guide "guide AccessiWeb" explains how to test each criterion with the tools "source code", "Web Accessibility Toolbar" and "Web Developer Toolbar". On the following page, you can see an explanation card for the first criterion of the methodology, used to evaluate the availability of text alternatives.
http://www.accessiweb.org/fr/guide_accessiweb/guide-accessiweb-fiche-1-1.html
See also criterion 1.3, which requires human judgement:
http://www.accessiweb.org/fr/guide_accessiweb/guide-accessiweb-fiche-1-3.html

Anysurfer

At least two evaluators check the website independently.

BIK

The full BITV-Test is a tandem test. The ranking decisions and comments of two independent testers are recorded in the testing application and harmonized in an arbitration phase after both testers have concluded their individual tests. In addition, especially with less experienced testers, senior staff carry out a quality assurance step with access to both testers' original and harmonized ranking decisions and comments. Quality assurance comments and ranking changes are worked into the final test report.

Drempelvrij

This is one of the accreditation requirements. It is documented according to the requirements described in ISO/IEC-17020. The inspection process is formally described and publicly available.

Nomensa

Yes, we are an ISO 9001 accredited company. All our processes are covered by our quality documentation.

RNIB ('Better Connected' report)

The automated testing has been used in the last few years, before my time, and it's quite robust. For the manual testing, we cross-check about 10% of the reports between members of the team.

Sensus

We never rely solely on results from automated validations. So our evaluations are mainly based on manual testing. The results and the report always go through a quality assurance process, carried out by another expert from the company.

Technosite

Internally reports are reviewed by a supervisor, but the process is not formalized or documented.

(7) Is there a documentation of the tests carried out (public or confidential) including the ranking decisions per checkpoint?

Access-for-all

Yes, the test report is delivered to the client; each success criterion is rated "pass", "fail" or "not applicable". Until certification, this report is reworked in three iterations. To meet a certain level of certification, all criteria on the respective level have to be met. The final test report constitutes the authoritative assessment (WCAG A, AA or AA+). The reports and assessments are not published.

AccessiWeb

For each web site we have certified, there is a review report. You can view an example at:
www.accessiweb.org/fr/Label_Accessibilite/galerie_sites_web_accessibles/groupama-com/

Anysurfer

No.

BIK

All final test reports of published tests are publicly available. They include all harmonized rankings and comments for each of the 52 test steps on all pages tested. Clients may choose not to publish the test report, but publishing and linking to the report is a requirement for sites wanting to carry BIK's 90plus label.

Drempelvrij

Detailed reports of the findings are provided to website owners and the management of drempelvrij.nl.

Nomensa

Yes, our internal documentation is based on the success criteria and conformance requirements for WCAG.

RNIB ('Better Connected' report)

The spreadsheet we use to report has pages that are not sent to SOCITM, but include a list of potential issues for each checkpoint and additional guidance for testers (e.g. "Pass if: NOFRAMES content duplicates the framed content or if it contains links to the framed content so content can be accessed outside the FRAMESET."). We also have a methodology document which is internal. It contains additional instructions such as:

5.1 - Simple Data Tables

  1. Check council tax tables, recycling centres, etc. Data tables with 4 or more columns and 3 or more rows require both row and column headings. Fail if these are not present.
  2. For smaller tables, we require row headers only to pass.
  3. If the table is more than 2 years old then ignore it.
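To make the decision rule explicit, the quoted table instructions could be sketched as the following hypothetical function (this is not RNIB's actual tooling; the two-year exemption requires human judgement, so it is taken as an input flag):

```python
# Hypothetical sketch of the simple-data-table rule quoted above; the
# "older than 2 years" exemption cannot be automated and is a flag here.
def table_headers_ok(cols: int, rows: int,
                     has_row_headers: bool, has_col_headers: bool,
                     older_than_two_years: bool = False) -> bool:
    """Return True if a data table satisfies the quoted heading rules."""
    if older_than_two_years:
        # Rule 3: tables more than 2 years old are ignored.
        return True
    if cols >= 4 and rows >= 3:
        # Rule 1: larger tables need both row and column headings.
        return has_row_headers and has_col_headers
    # Rule 2: smaller tables pass with row headers only.
    return has_row_headers
```

A tester would apply this per table found in the sampled pages, e.g. `table_headers_ok(cols=5, rows=4, has_row_headers=True, has_col_headers=False)` fails under rule 1.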
Sensus

Yes, there is extensive documentation for all tests carried out by Sensus. Depending on the client, these are either confidential or public. Ranking decisions may be given in some cases.

Technosite

Yes, the tests are documented. The document is private. Ranking follows the UWEM (and therefore WCAG) priorities. Our label also includes conformance with the Spanish UNE 139803:2004 standard [2]. This standard is based on WCAG 1.0, but three checkpoints (4.3, 5.5 and 9.4) have different (higher) priorities: 1, 2 and 2 respectively. So we check those too.

(8) Are there instructions as to the required sample size for web site testing (number of pages, mandatory inclusion of pages)?

Access-for-all

No. The client has the option of presenting a list of all pages to be tested. In other cases we sample pages ourselves. Our blind testers are quite skilled at discovering accessibility problems, even in very large sites.

AccessiWeb

We review 10 pages representing each type of page in the web site. Nevertheless, we may look at any other page of the site during the review.
We review the home page, the help or accessibility page, the site map, the search results page (if available), pages containing other technologies (such as Flash or scripts), image maps (if available), pages with forms, pages with data tables, and pages with files to download.
All details on the page selection are available at:
http://www.accessiweb.org/fr/Label_Accessibilite/evaluation/#quoi

Anysurfer

Yes, a sample is about 25 pages of each language version. It always includes the homepage, the contact page, the search tool and the sitemap if applicable.

BIK

The first tester chooses an adequate page sample. The minimum is three pages, usually including the start page, the contact page, and one or more content or section overview pages. The number of pages chosen for the sample depends on the number of different templates and content types used by the site.

Drempelvrij

Yes. How the sample must be drawn is described in detail in the 'Caesura & Sampling' document.

Nomensa

Yes, we have guidelines that help us choose an appropriate and representative sample of pages for testing.

RNIB ('Better Connected' report)

No, but we agreed to always look for sections that are more likely to be accessed by the public, such as the search functionality, the council tax information, etc.

Sensus

For some projects, yes, to a certain extent. We have specified a scope that covers page types and a number of pages within each category.

Technosite

Yes, but it is not public. We do not necessarily follow the UWEM sampling procedure.

(9) In manual checks, are there detailed instructions per checkpoint referring to test tools such as browser web accessibility bars?

Access-for-all

Yes and no. Of course we mandate tools for individual test steps (Validator, Color Contrast Analyzer), and more tools are discussed informally – these tools are subject to constant changes.

AccessiWeb

Yes, see the answer to question 6. We have created evaluation guides for the Web Accessibility Toolbar, Web Developer Toolbar and source code inspection, and an older one for evaluating with JAWS. One example is at:
http://www.accessiweb.org/fr/groupe_travail_accessibilite_du_web/manuel_accessiweb/

Anysurfer

No.

BIK

Yes. The publicly documented test steps detail browser, version, browser toolbar, and other evaluation tools to be used in each test step. Checks are described step-by-step, including assistance in interpreting the results. In many cases, test steps are carried out in the two dominant browsers, Firefox and Internet Explorer. The tool page specifies the current selection of tools for testing, which is regularly updated.

Drempelvrij

UWEM is used for that purpose. According to ISO/IEC-17020, all browser test tools must be described, tested and calibrated before they can be used in the inspection process.

Nomensa

Yes, our internal documentation recommends the tools that should be used for each checkpoint. Each tester is given a toolkit of approved tools when they join the company, and this toolkit is continuously reviewed and updated.

RNIB ('Better Connected' report)

Not for all checkpoints/issues. We use the same tools we normally use for other work, but they are not documented within this project.

Sensus

We always give a list of tools used for the test carried out, but not specifically for each success criterion, in all cases.

Technosite

Not formally, although we suggest which tools to use.

(10) Is there information on the share of automatic and manual (expert) testing and the way they may be used in combination?

Access-for-all

This also works through informal communication – the applicability of automatic test tools is rather limited, and testers tend to have individual preferences.

AccessiWeb

It is not specifically indicated in the methodology, but during the training we teach people how to use automated tools and their results to check each criterion.

Anysurfer

We don't perform automatic testing.

BIK

Apart from using the W3C Validator to check code validity, the automated functions provided in the browser accessibility toolbars, and a few bookmarklets, the BITV-Test does not use automated tests.

Drempelvrij

The Netherlands' government has an automated tool that can reliably measure 47 of 125 web guidelines. The publicly available version of this tool is much used. Besides that, the tool is used as a monitoring instrument by the government. At present, 511 government websites are tested monthly, using a random sample of 20 pages.

An important issue is that automated instruments are by nature not suitable for demonstrating that websites meet accessibility requirements. For that reason, the 511 websites are examined once a year for accessibility or web guidelines conformance claims. When claims are found, the researchers ask the website owner for evidence that a manual inspection has been carried out for the requirements that cannot be reliably tested with an automated tool. An inspection report is a suitable piece of proof that can be examined to confirm that the claim is true.

Since October 2008, when the examination started, the only claims that could be confirmed were those based on a formal inspection by an accredited organisation. In all other cases, the self-declarations of conformity (SDoCs) could not be confirmed by an inspection report. As a result, follow-up research on the correctness of all the SDoCs that were found could not be carried out.

The findings justify the conclusion that, until now, SDoCs have had no practical value whatsoever. This does not mean that SDoCs are useless by default. But the situation can only change when an SDoC system becomes available for websites which clearly and unambiguously demonstrates that requirements are fulfilled. Such a system, preferably based on ISO/IEC-17050, currently appears not to exist.

Combination of automatic and manual testing

The automated tool has proven to be a very effective instrument for demonstrating that requirements are not fulfilled. As such, automatic testing used as a falsification instrument, in combination with manual inspection audits, seems to be a suitable method for large-scale monitoring of accessibility and web interface quality.

Nomensa

Yes, our internal documentation covers this. It is also part of the core training we provide for all our developers.

RNIB ('Better Connected' report)

The impact of the automated test is very marginal, but essential for the initial screening.

Sensus

Automated tools are only used by our experts to support the manual process. Therefore they are considered to be of small significance in the overall tests. It is stated in the reports that both automated and manual tests are carried out.

Technosite

For labelling all testing is basically manual, although we do use automated tools for initial evaluation to detect problems, and also for periodic checks after a label is issued (at 3, 9, 15 months).

(11) Is testing browser-based only, or is assistive technology or other user agents (e.g. mobile phones or PDAs) used during evaluation?

Access-for-all

See http://www.access-for-all.ch/ch/zertifizierung/rahmenbedingungen/testumgebung.html. We also use this approach and environment for accessibility tests without certification. Other environments (e.g. IE6 in intranet contexts) are also feasible.

AccessiWeb

We use assistive technology and browsers.

Anysurfer

Screenreaders and magnification software are used.

BIK

The BITV-Test has chosen to be easy to use by not requiring tests with assistive technologies. Such tests, however, are used during test development to make sure that the specified checks are valid for users of assistive technologies.

Drempelvrij

Inspection is done against the normative document. Assistive technology hardware and software products, user agents or devices do not have a 'reference' status, but may be used while performing inspections. The described situation may change for web guidelines version 2.

Nomensa

We usually test with browsers. We also use screening to carry out basic assessments with access technologies and for keyboard-only interaction. Our approach is flexible, though: we can also test on different platforms, and with different technologies if needed. We also provide a complementary testing service with people with disabilities, often provided in tandem with a technical accessibility audit.

RNIB ('Better Connected' report)

We always use a combination of browser testing (complemented with toolbars) and assistive technology testing (mostly JAWS and ZoomText).

Sensus

It depends on the project. In many of our projects an expert evaluation is supplemented by a number of user tests with users of assistive technologies. Most often these are users with visual impairments, physical impairments and dyslexic users. If user testing is not part of a project the website will always be tested to some extent by a screen reader user.

Technosite

We mainly use browser-based tools. We use assistive technology and (on request) real mobile devices to support our findings, but not as the basis of tests. Some of our testing staff have disabilities that require them to use AT.

(12) Is the method of assessment or ranking documented / transparent?

Access-for-all

During certification, the test report is revised several times and documents the progress and the results. It is sent to the client and is not published. Only the final verdict is published, together with the label issued.

AccessiWeb

All evaluation reports are public. You can view the list of certified web sites at:
http://www.accessiweb.org/fr/Label_Accessibilite/galerie_sites_web_accessibles/

Anysurfer

I don't understand this question. Of course we document it when a check is not ok.

BIK

The testing and ranking methodology is explained in detail in the test documentation (in German). Examples of ranking decisions in other tests are available to help verify (or contest) the adequacy of a ranking decision in any individual test.

Drempelvrij

Yes. The required documents that contain the methodology are available without restriction. The results of assessments are added to a register.

Nomensa

Yes, it is publicly available as part of the WCAG methodology.

RNIB ('Better Connected' report)

This is very specific to this test [of UK local authorities web sites]. It was introduced mainly to stress the difference between conformance testing and functional accessibility.

Sensus

The method is always documented, and transparent in the sense that it refers to the conformance requirements in WCAG 2.0.

Technosite

It is documented, but not public.

(13) Are usability aspects included in accessibility testing?

Access-for-all

Yes, in some areas the assessment touches on usability issues. We list usability measures as recommendations, but implementation is not required.

AccessiWeb

We evaluate accessibility; some usability criteria may be included, but they are not a priority.

Anysurfer

We try not to mix them in accessibility testing although sometimes it is a thin line. We want our guidelines to be as objective as possible. However, at each check we can include remarks that are not mandatory for the quality label. These generally are usability-related.

BIK

The BITV-Test focuses on accessibility. Some checks (e.g. meaningful semantic structure) touch on usability but this is not our concern.

Drempelvrij

Yes, but only the usability aspects that are described in the normative document.

Nomensa

We test using WCAG, so usability tends not to be included specifically. We do, however, include usability best practice when we provide code solutions for addressing issues identified during an audit.
Usability comes more strongly into play when we provide testing with people with disabilities. This tends to highlight issues identified during the audit, but with a more usability focused perspective.

RNIB ('Better Connected' report)

They only marginally affect the ranking. Usability is part of the Better Connected report but assessed by SOCITM consultants.

Sensus

Normally not. In certain projects we work with usability agencies to meet the needs of both accessibility testing and usability testing.

Technosite

No. We try to base our recommended solutions on usability criteria. We provide usability testing as a separate service but at present there is no label.

(14) What levels of compliance are provided to sites tested? Do they map onto other schemes (such as the A - AA - AAA of the WCAG)?

Access-for-all

A, AA, and AA+. The AA+ level refers to the Swiss guidelines for accessible web content, which cover all of WCAG 2.0 AA plus the useful WCAG 2.0 AAA criteria. See P028, the Swiss federal guidelines for designing accessible web sites.

AccessiWeb

We have levels Bronze, Silver and Gold, corresponding to levels A, AA and AAA of WCAG 1.0 respectively. See the mapping mentioned above.

Anysurfer

A site with the AnySurfer label does not necessarily comply with WCAG 1.0, as that standard is out of date. After the review of the AnySurfer guidelines, all WCAG level A criteria will be incorporated; a site with the AnySurfer label will then also meet WCAG level A.

BIK

The level of compliance used is BITV Prio 1, which equals WCAG 1.0 levels A and AA.

Drempelvrij

Websites can currently be inspected against the following compliance levels:

  • WCAG 1.0 level A
  • WCAG 1.0 level A and level AA
  • Web Guidelines version 1.3, which includes WCAG 1.0 level A and level AA

Nomensa

Yes, those are the conformance models we use.

RNIB ('Better Connected' report)

We don't provide any compliance statement or anything like that. We only assess their conformance to Single-A and Double-A.

Sensus

We always test for WCAG 2.0 AA compliance and, in some projects (depending on customer needs), AAA compliance.

Technosite

We issue labels for WCAG A and AA (as covered by UWEM and Euracert). We offer an AAA label but nobody has asked for it as yet.

(15) Are there elements that ensure the sustainability of results, such as requirements for repeat checks or other linked activities, e.g. for training online editors?

Access-for-all

Yes, certification is valid for 2 years only, then re-testing is mandatory. There is a contractual obligation to inform us about major changes to the site and re-test in order to keep the certification.

AccessiWeb

When we deliver the AccessiWeb logo we evaluate the site periodically to ensure that they still deserve the logo. I am not sure I understand the question correctly.

Anysurfer

Training is optional. We only perform sample checks on labeled websites. We do not have the manpower to check every site every year. If an accessibility problem is found on a labeled website it will be reported. If they don't correct it, they lose the label.

BIK

No. Sites using the 90plus or 95plus seals are, however, required to link to the test report, which carries the date of testing. There is no scheme for regular re-tests.

Drempelvrij

Several organisations offer web guidelines and accessibility training programs.

Since 2009, courses on organisational and editorial aspects of compliance are targeted towards project leaders, managers and editors of municipalities, water boards and provinces. These courses are provided under the 'acceleration agenda' of the National Implementation Programme (NUP).

To accelerate compliance, a co-operation is planned with software makers and implementers that provide services to government. The availability of a user-friendly web editor that can produce high-quality, fully web guidelines compliant content is recognised as a key factor for sustainable success. An editor that meets the requirements already exists. The possibility of releasing this editor under an open source licence is currently being examined, since it may well aid in the successful deployment of the web guidelines quality model.

Nomensa

We include a technical workshop as part of the service, and we often include a retest as part of the package. We also provide public and in-house training master classes in all aspects of accessibility.

RNIB ('Better Connected' report)

No, this is only an annual report. However, some of the sites contact us after the report to ask for more information, and in some cases we do paid work with them to improve their level of accessibility.

Sensus

Yes. We often give training sessions with project groups as well as web editors. We also suggest a strategy of repeat checks.

Technosite

We provide training but it is quite separate from the labelling process. After issuing a label we carry out quarterly checks, alternating manual checks of key aspects on a reduced sample with automated checks across the entire site.

(16) Is the method linked to some sort of quality seal? If so, is the issuing of the seal limited to a particular period or tied to activities aimed at ensuring sustainability?

Access-for-all

The label carries the year of issue, which indicates when certification took place. Certified web sites:
www.access-for-all.ch/ch/zertifizierung/zertifizierte-websites.html

AccessiWeb

We have the seal AccessiWeb which is delivered for a two year period.

Anysurfer

See above.

BIK

Sites reaching 90 (of 100 possible) points or better can carry the 90plus logo, which links to the test report. Sites reaching 95 or more points can carry the 95plus logo (which must also be linked to the report). Beyond the date of the test report, there is no scheme in place to check whether the site continues to be accessible, and no mandatory activities for site owners to ensure the sustained accessibility of their site.

Drempelvrij

Yes. Organizations that are entitled to carry out inspections for the Quality Mark Drempelvrij.nl Foundation are allowed to hand out quality marks. These quality marks are valid for one year.

Nomensa

No, we are not affiliated with a wider seal or scheme.

RNIB ('Better Connected' report)

See 15 above.

Sensus

No. A good strategy for quality sealing does not exist in Denmark.

Technosite

Yes, we issue a label (Technosite+Euracert, A and AA). We carry out periodic checks as described in the previous answer.

(17) Is the approach publicly funded or offered as a commercial service (or is it a mix of both)?

Access-for-all

It is a mix. Certification is offered as a commercial service. The high degree of manual examination means that the price does not cover the true costs. After all, the certification tests have to be taken very seriously. It is still very difficult for people with disabilities to find a job and earn enough money for a living. The trust “Access-for-all” is committed to this integrative principle and will continue to adhere to it. The foundation “access-for-all” was founded by the blind IT specialist Arnold Schneider.

AccessiWeb

It is a commercial service.

Anysurfer

AnySurfer receives no government funding. It is a project of a non-profit organisation.

BIK

The BIK project is a government-funded scheme. The project covers test development, some self-selected tests of important sites, and quality assurance of tests. Apart from that, BITV-Tests are offered as a commercial service, often in tandem with other organisations that have qualified testers.

Drempelvrij

Accessibility assessment is offered as a commercial service and is monitored by an independent non-profit organisation, the Quality Mark drempelvrij.nl Foundation. The Netherlands' government actively participates in the development of formal specifications and supporting tools, and co-funds activities (development of the normative document, automated test tool, scientific research).

Nomensa

It is a commercial service.

RNIB ('Better Connected' report)

Good question. I don't actually know. We should ask this to SOCITM. You might find more info on their site: http://www.socitm.gov.uk/socitm/.

Sensus

It is a commercial service.

Technosite

The label and the consultancy work are commercial services.

Short excursion: the results of CEN/BT WG 185

The CEN/BT WG 185 Project Team Final Report for Approval on European accessibility requirements for public procurement of products and services in the ICT domain (PDF) (European Commission Mandate M 376, Phase 1) included an “analysis of existing conformity assessment systems and schemes” (pp. 29) which contains data for nine different schemes:

  • AENOR (ES)
  • Drempelvrij (NL)
  • BITV-Test (DE)
  • Segala (IE)
  • Euracert (EU)
  • PubliAccesso (IT)
  • See it right (UK)
  • TCO Development (SE)
  • VPAT (US)

The information provided is organised into a short overview text and a data table covering aspects such as method of determination, surveillance, complaint system and type of party performing the attestation (a dimension that we have not covered here).

This rather formal exercise can be taken as a starting point; in our view, however, the data assembled do not really help in appreciating the particular accessibility testing approach chosen. For this, we need to look much more closely at the methodology: How are tests carried out? Who defines the sample? How are results validated, processed in the ranking scheme, and aggregated into a final score or verdict?

Conclusion

While falling short of an in-depth review of the state of accessibility testing in Europe, we hope that simply by comparing the answers of the organisations to our questions readers will have gained some insight into the differences of approach. Each of the accessibility testing schemes has its own development history with its peculiarities and differing commitments. These are partly due to differences in national regulations, but the unanimous reference to WCAG 2.0 shows that the basic set of criteria has now been firmly established.

Many of the WCAG 2.0 techniques end with tests that detail procedure and expected results (pass or fail), but these tests can be no more than atoms in any integrated accessibility evaluation methodology. In our view, there are a few critical aspects of methodology that have so far received insufficient attention in approaches such as UWEM. To keep this review short, we list just three of them:

  • The arbitration of judgemental errors, oversights and skill levels in (human) expert testing
  • The weighting of accessibility problems: how to prevent a situation where critical failures which could make a page completely inaccessible are drowned in many other ‘passes’
  • The quantification of checks and check results: The pass/fail dichotomy is good for machines, but seems in many cases too coarse to do justice to problems found on actual pages ‘out there’.
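The weighting and quantification problems above can be illustrated with a small sketch. The following Python snippet is purely illustrative (it is not the actual algorithm of the BITV-Test or any other scheme mentioned here); the grades, weights and the capping rule are assumptions chosen to show how graded verdicts and severity weighting can keep a single critical failure from being drowned out by many other passes.

```python
# Illustrative sketch of a rating model with graded verdicts instead of a
# binary pass/fail, and a severity cap for critical failures. All grades,
# weights and thresholds are hypothetical.

from dataclasses import dataclass

# Graded verdicts; the numeric values are assumptions for illustration.
GRADES = {"pass": 1.0, "mostly": 0.75, "partly": 0.5, "barely": 0.25, "fail": 0.0}

@dataclass
class Check:
    name: str
    weight: int             # relative importance of the check
    verdict: str            # one of the GRADES keys
    critical: bool = False  # a failed critical check caps the overall score

def score(checks, max_points=100):
    """Aggregate graded check results into a 0..max_points score."""
    total_weight = sum(c.weight for c in checks)
    raw = sum(c.weight * GRADES[c.verdict] for c in checks) / total_weight
    points = raw * max_points
    # A failed critical check blocks a good overall rating regardless of how
    # many other checks passed (the weighting problem described above).
    if any(c.critical and GRADES[c.verdict] == 0.0 for c in checks):
        points = min(points, 50.0)
    return round(points, 1)

checks = [
    Check("Images have text alternatives", weight=3, verdict="fail", critical=True),
    Check("Headings are properly nested", weight=2, verdict="pass"),
    Check("Sufficient colour contrast", weight=2, verdict="mostly"),
    Check("Forms are keyboard-operable", weight=3, verdict="pass"),
]
print(score(checks))  # → 50.0
```

In this hypothetical run, the weighted average alone would yield 65 of 100 points despite a failure that could make pages completely inaccessible; the capping rule brings the score down to 50, so the blocking problem remains visible in the final verdict.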

We hope that this review may be the first step towards a lively exchange of experiences across European testing schemes. Pooling our experiences with the approaches that we have developed and applied over the years may eventually lead to the improved common methodology that was the promise of UWEM 1.0.