World Green Building Council Rating Tools Task Group:
QUALITY ASSURANCE GUIDE FOR GREEN BUILDING RATING TOOLS
Version 1.0, 2013 (Draft 01, September 2013)
This guide has been developed as a part of the World Green Building Council (WGBC)
Rating Tool Task Group contribution to furthering the development and implementation
of green building rating tools globally. The guide aims to provide a basic outline for Green Building Councils, or other parties, that wish to set up or develop nationally recognised green building rating tools. The guide draws on the experience of the mature rating tools represented within the task group, on the internal product development and service design process of the Building and Construction Authority (BCA), Singapore, and on ISO 9001:2008, ISO/IEC 17020:2012 and ISO/IEC 17065:2012.
The objective of this quality assurance guide is to ensure that those developing green building rating tools pay due attention to the processes involved in creating and operating a locally run, locally adapted rating tool for the country in which it will operate. By following this guide, a rating tool can take the steps required to be recognised as one of the many globally established green building rating tools.
The guide splits the quality assurance measures across two generic stages, namely:
1. Developing the Green Building Rating tool
2. Implementation and operation of the Green Building Rating tool.
The development section delves into criteria formation, stakeholder engagement and gaining the relevant industry and governmental support. The implementation and operation section refers to the management and delivery of the system, ensuring there is the capacity to deliver and maintain the rating tool, including ensuring the independence of those assessing the projects. These two stages should not be seen as linear: there would be a constant feedback loop, and the rating tool would need to develop and evolve over time.
The guide forms a working document and would be amended based upon users' feedback and as our collective knowledge and experience grow.
A simple checklist is provided as a summary and should serve as the basis for a quality promise to the users of the rating tool. Finally, the WGBC would encourage all rating tools, upon establishment, to consider certifying their certification process under a relevant international third-party quality assurance assessment.
PART 1: DEVELOPING THE RATING TOOL
This section addresses the basic quality assurance measures that should be considered during the development of the green rating tool. This development phase covers the drafting of the criteria (be it the first version or a revision of an existing rating tool), stakeholder involvement in the criteria development, and the testing of the criteria. Part 2 discusses the quality assurance of organisational structures, independence, transparency and the general processes of certification. Although these elements are linked and interdependent, for simplicity they have been separated in this guide.
1.1 GENERAL QUALITY ASSURANCE PROCESSES
The guide reinforces the process approach identified within ISO 9001, which should be considered when developing, implementing or improving the effectiveness of any process or product. The "Plan-Do-Check-Act" methodology is shown in Figure 1 and forms the framework built upon in this guide.
Figure 1 – Plan-Do-Check-Act methodology (adapted from ISO 9001:2008, p. 8)
Such an approach emphasises the continual feedback loops required of any quality management system to measure the effectiveness of the rating tool criteria and certification processes, enabling the necessary improvements to be made.
1.2 CRITERIA DEVELOPMENT
The criteria of the rating tool, against which the building is marked, form the most visible aspect of the rating tool's operation and are critical to the legitimacy of the tool.
Therefore the objectives and requirements of the rating tool need to be defined and agreed upon from the outset. There would need to be a balance between the requirements of the customer and the environmental intent of the rating tool to facilitate the successful uptake of the tool. Figure 2 outlines the approach that should be adopted in the design and development of the rating tool.
Figure 2 – Product or service design process – adapted from the Building and Construction Authority (BCA).
For the rating tool to develop effectively it must be seen to be meeting stakeholders' requirements in addition to the objectives of the organisation behind the tool. This means engaging the target customer(s) to ensure the rating tool will be used and will thus help shift the targeted market towards the goal of environmental sustainability. The rating tool must follow the current regulatory requirements of the country's built environment. In addition, due to the nature of building procurement, the relevant professional bodies would need to be engaged to provide advice on what is achievable within the market and within their professional capabilities. Academic and research institutions can provide advice on criteria that stretch the industry, as well as aiding in gathering empirical evidence for criteria robustness.
The draft criteria and processes for the rating tool should be subject to public consultations and reviews at key development stages and thus gain validation within the local context from those parties that would be using the tool as well as the target customers.
In parallel with the criteria development, the rating tool resources, development guides, method statements and technical documents would need to be developed. These would detail the proposed assessment protocols, covering how to apply for certification, which stages of the building lifecycle the rating tool covers, the assessment processes and procedures to comply with, and the documentary evidence that would be required. These elements would be considered and tested through pilot projects and refined through the relevant feedback channels that should be in place.
In summary the criteria development should:
• Contain functional and performance requirements, and these shall be complete, unambiguous and not in conflict with each other or any statutory requirements.
• Make due reference to the applicable statutory and regulatory requirements, including local codes or standards of best practice.
• Identify and learn from international precedents of similar rating tools or guides, but must not breach intellectual property rules.
• Respect other considerations that are context specific.
The questions to ask at the stage of tool development include:
• What stages of development will the rating tool cover – design and construction, or design, construction and operational performance?
• Will there be a validity period for the rating?
• What is the current market norm, and what are the minimum regulatory standards that apply?
• Who is the target user of the tool, and who will champion it and use it during the pilot phase?
• What building types will the rating tool cover?
• Is the tool focusing on the rural or urban context, or reflective of both?
• Does the country have large seasonal or geographic variations that need to be taken into account, and if so, how can these be incorporated?
• What is the level of capability within the industry, and will training programmes be required as part of the tool's implementation?
• What are the opportunities to maintain and develop existing local skills and incorporate the local vernacular within the tool?
• What are the applicable international best practices that can be applied, in terms of criteria formation, assessment processes or technical guidance?
• Are there any unique social or economic considerations that need to be identified and acknowledged within the tool?
1.3 ENGAGEMENT AND PILOTS
With the draft criteria formed, it is important to identify who to engage to provide detailed feedback on the criteria, the assessment methodologies and the processes. With this feedback the tool can be developed and refined further. Those involved in the engagement and feedback should include the professional bodies that represent the various interests of the construction industry. This would include, but is not limited to, architects; mechanical, electrical, civil and structural engineers; facilities and operations managers; contractors; developers and their representatives; and various equipment and material suppliers. In addition, and in some situations most critically, it is vital to gain government support for the rating tool. With this support, government could draw upon the rating tool as a minimum requirement for important sites, as a planning policy tool for development, as a driver for incentives, or through a commitment for government buildings to lead the industry. This level of support takes time, but it will drive the scheme forward. At a minimum, the government must recognise the rating tool and should be engaged during its development.
Thorough testing of the rating tool through a variety of pilot projects allows the criteria to be tested and refined, along with the tool's processes and procedures. The monitoring, measurement, inspection and testing of the criteria will identify whether they meet the aims and intent they were designed for. In addition, the use of real projects will allow the rating tool body to calibrate the rating levels and identify whether the industry and related professionals have the capability to deliver projects to the various levels stipulated.
The pilot projects thus provide feedback on whether the tool is realistic and calibrated to the requirements of the industry.
The subsequent review and evaluation of the rating tool based upon the engagement sessions and pilot studies should be conducted with representatives of the stakeholders concerned with the development of the criteria.
The pilot testing provides a chance to ensure that the rating tool body has the resources and processes in place to operate the scheme. It allows the rating tool to ensure it is manageable for the launch and implementation, including building up the assessor network and required training materials.
Engagement and Pilot Testing:
• Identifying the key stakeholders is critical to advancing the rating tool; these should include developers, architects, engineers, facilities managers, building users, main contractors and suppliers.
• Having government recognition and support is critical to the rating tool's long-term success.
• Pilot testing with a "champion" developer or customer will allow the rating tool to iron out its processes and ensure there is enough capability, both within the tool's implementation body and within the industry, to use the tool.
1.4 CERTIFICATION
The certification processes of the rating tool are discussed in Part 2; however, at the criteria development stage they must already be considered, in terms of the grading of projects and of ensuring that resources are available upfront for administering the tool in operation. This includes the requirements for the issued certificate or report, inspection methods, assessment procedures and the necessary information contained within the criteria guidance or technical manual.
The inspection methods and procedures should be defined clearly within the rating tool's requirements (or criteria). These guidelines should define unambiguously, for those seeking to use the rating tool, how to comply and the evidence base required to demonstrate compliance. This level of detail can take the form of a technical manual, criteria guidance notes and worked examples.
The rating tool report or certificate that is given to the client should include, as a minimum: the identification of the issuing body (i.e. the name and address of the GBC in charge of the rating tool), the date the certificate was issued, the identification of the project that has been rated (the name and address of the client and project), the signature (or indication of approval) of authorised personnel, and the rating result. For more guidance, refer to ISO/IEC 17020:2012(E), section 7.4.
PART 2: IMPLEMENTATION & OPERATION OF THE RATING TOOL
2.1 THE GBC & RATING TOOL ORGANISATION
This guide is designed for the situation where the Green Building Council is the administrator of the rating tool. However, globally there are many different models of rating tool administration that are de-linked from Green Building Councils, including administration by government entities, non-government organisations and professional research bodies. These other modes of green rating tool administration can still follow this quality assurance guide to ensure independence, transparency and consistency; however, they may achieve these goals in a different manner.
Figure 3 – Typical model for GBCs that administer a Green Building Rating Tool