Definition of Evaluation by Different Authors

Overview of the types of information that systems need to capture and link. Introduction: what is meant by impact? Research findings will be taken up in other branches of research and developed further before socio-economic impact occurs, by which point attribution becomes a huge challenge. It is perhaps assumed here that a positive or beneficial effect will be considered as an impact, but what about changes that are perceived to be negative? There is a great deal of interest in collating terms for impact and indicators of impact.

Assessment for Learning is the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go, and how best to get there. Such assessments aim to enable instructors to determine how much learners have understood of what has been taught in class and how far they can apply that knowledge.

From an international perspective, this represents a step change in the comprehensiveness with which impact will be assessed within universities and research institutes, incorporating impact from across all research disciplines. To evaluate impact, case studies were interrogated and verifiable indicators assessed to determine whether research had led to reciprocal engagement, adoption of research findings, or public value. This might include the citation of a piece of research in policy documents or reference to a piece of research within the media. An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared rather than impact from specific disciplines. In 2009-10, the REF team conducted a pilot study for the REF involving 29 institutions, submitting case studies to one of five units of assessment (in clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature) (REF2014 2010). In development of the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and impact was difficult. Frameworks for assessing impact have been designed and are employed at an organizational level, addressing the specific requirements of the organization and its stakeholders. In this case, a specific definition may be required; for example, the Research Excellence Framework (REF) Assessment framework and guidance on submissions (REF2014 2011b) defines impact as "an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia". There are areas of basic research where the impacts are very far removed from the research or are impractical to demonstrate; in these cases, it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.

A comparative analysis of these definitions reveals that, in defining performance appraisal, the authors were saying the same thing in slightly modified ways. The justification for a university is that it preserves the connection between knowledge and the zest of life, by uniting the young and the old in the imaginative consideration of learning. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and the impact are linked appropriately.
The Payback Framework (Hanney and González-Block 2011) can be thought of in two parts: first, a model that allows the research and subsequent dissemination process to be broken into specific components within which the benefits of research can be studied; and second, a multi-dimensional classification scheme into which the various outputs, outcomes, and impacts can be placed (Hanney and González-Block 2011). As part of this review, we aim to explore the following questions: What are the reasons behind trying to understand and evaluate research impact? Despite the concerns raised, the broader socio-economic impacts of research will be included and will count for 20% of the overall research assessment as part of the REF in 2014. Impact is derived not only from targeted research but also from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. Narratives can be used to describe impact; they enable a story to be told, place the impact in context, and can make good use of qualitative information. The most appropriate type of evaluation will vary according to the stakeholder we wish to inform. The current definition of health, formulated by the WHO, is no longer adequate for dealing with the new challenges in health care systems.

Over the past year, a number of new posts have been created within universities, such as for writing impact case studies, and a number of companies now offer this as a contract service. Providing advice and guidance within specific disciplines is undoubtedly helpful. A taxonomy of impact categories was then produced onto which impact could be mapped. Wigley (1988: 21) defines it as "a data reduction process that involves …". Clearly the impact of thalidomide would have been viewed very differently in the 1950s compared with the 1960s or today. This is being done for the collation of academic impact and outputs, for example, by Research Portfolio Online Reporting Tools, which uses PubMed and text mining to cluster research projects, and STAR Metrics in the USA, which uses administrative records and research outputs and is also being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). Muffat says, "Evaluation is a continuous process and is concerned with more than the formal academic achievement of pupils." It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking, such that one is able to see further by standing on the shoulders of giants. Two areas of research impact, health and biomedical sciences and the social sciences, have received particular attention in the literature by comparison with, for example, the arts. One author (2007: 11-12) describes and explains the different types of value claim.
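As an illustration of the text-mining approach mentioned above, the sketch below clusters a handful of project abstracts by topic using TF-IDF features and k-means. It is a minimal sketch only: the sample abstracts, the use of scikit-learn, and the number of clusters are assumptions made for demonstration, not a description of how Research Portfolio Online Reporting Tools or STAR Metrics actually work.

```python
# Minimal sketch: clustering research project abstracts by topic.
# Assumes scikit-learn is available; abstracts and cluster count are
# illustrative placeholders, not real portfolio data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "Randomised trial of a new cardiovascular drug in elderly patients",
    "Genome sequencing of bacterial pathogens for outbreak tracking",
    "Household survey of social policy uptake in deprived communities",
    "Qualitative study of social work practice and child protection",
    "Numerical methods for modelling ocean and atmosphere circulation",
    "Satellite observations of sea-surface temperature and climate change",
]

# Convert each abstract into a weighted bag-of-words vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Group the vectors into three clusters (roughly: health, social, environment).
kmeans = KMeans(n_clusters=3, random_state=0, n_init=10)
labels = kmeans.fit_predict(X)

for label, abstract in zip(labels, abstracts):
    print(f"cluster {label}: {abstract}")
```

In a production system the clusters would then be labelled and linked back to grant and output records rather than simply printed.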
It is possible to incorporate both metrics and narratives within systems, for example, within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities) for whom the purpose of analysis may vary (Davies et al.). While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. One reason for assessing impact is to enable research organizations, including HEIs, to monitor and manage their performance and to understand and disseminate the contribution that they are making to local, national, and international communities. It is desirable that the assignation of administrative tasks to researchers is limited; therefore, to assist the tracking and collating of impact data, systems are being developed through numerous projects internationally, including Star Metrics in the USA, the ERC (European Research Council) Research Information System, and Lattes in Brazil (Lane 2010; Mugabushaka and Papazoglou 2012). In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to allow for the time and capability required to capture the information. Indicators were identified from documents produced for the REF by Research Councils UK, in unpublished draft case studies undertaken at King's College London, or outlined in relevant publications (MICE Project n.d.). Collecting this type of evidence is time-consuming, and again, it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group may have dispersed. The time lag between research and impact varies enormously. A further reason is to demonstrate to government, stakeholders, and the wider public the value of research. The Payback Framework enables health and medical research and impact to be linked and the process by which impact occurs to be traced. As Donovan (2011) comments, "Impact is a strong weapon for making an evidence based case to governments for enhanced research support". In viewing impact evaluations, it is important to consider not only who has evaluated the work but also the purpose of the evaluation, in order to determine the limits and relevance of an assessment exercise. In undertaking excellent research, we anticipate that great things will come, and as such, one of the fundamental reasons for undertaking research is to generate and transform knowledge that will benefit society as a whole. Understanding what impact looks like across the various strands of research, and the variety of indicators and proxies used to evidence impact, will be important in developing a meaningful assessment. If knowledge exchange events could be captured electronically as they occur, or automatically when flagged from an electronic calendar or diary, then far more of these events could be recorded with relative ease. Figure 2 demonstrates the information that systems will need to capture and link; a minimal sketch of such a record structure is given below.
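To make the kind of linkage described for Figure 2 concrete, the sketch below outlines one possible record structure connecting a research output to the indicators, evidence, and stakeholders associated with an impact. The field names and example values are illustrative assumptions, not the schema of Researchfish, the Research Outcomes System, or any CERIF-based system.

```python
# Minimal sketch of the information an impact-tracking system might capture
# and link. All field names and example values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    description: str        # e.g. citation in a policy document, media coverage
    source: str             # where the evidence can be verified
    date: str               # ISO date the evidence was recorded

@dataclass
class ImpactRecord:
    research_output: str            # the underlying publication, dataset, etc.
    researchers: List[str]          # people to contact if a case study is needed
    indicators: List[str]           # proxies used to evidence the impact
    stakeholders: List[str]         # government, funders, user communities, ...
    evidence: List[Evidence] = field(default_factory=list)

record = ImpactRecord(
    research_output="Trial of treatment X (journal article, 2011)",
    researchers=["A. Researcher"],
    indicators=["citation in clinical guideline", "uptake by health service"],
    stakeholders=["Department of Health", "patient groups"],
)
record.evidence.append(
    Evidence(
        description="Clinical guideline cites the 2011 trial",
        source="National guideline, section 4",
        date="2013-06-01",
    )
)
print(record.research_output, "->", [e.description for e in record.evidence])
```

Storing the evidence alongside the link between output and indicator is what would allow a narrative case study to be reconstructed later, even if the original researcher has left the institution.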
This framework is intended to be used as a learning tool to develop a better understanding of how research interactions lead to social impact, rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research. Perhaps the most extended definition of evaluation has been supplied by C. E. Beeby (1977). The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is therefore in an institution's interest to have a process by which all the necessary information is captured, so that a story can be developed in the absence of a researcher who may have left the employment of the institution. The process of evaluation is dynamic and ongoing.

However, there has been recognition that this time window may be insufficient in some instances, with architecture being granted an additional 5-year period (REF2014 2012); why only architecture has been granted this dispensation is not clear, when similar cases could be made for medicine, physics, or even English literature. Recommendations from the REF pilot were that the panels should be able to extend the time frame where appropriate; this, however, poses difficult decisions when submitting a case study to the REF, as it is unclear what the view of the panel will be and whether, if the time frame is deemed inappropriate, the case study will be rendered unclassified. Although it can be envisaged that the range of impacts derived from research in different disciplines is likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example, from business development to cultural change or saving lives. The objective of program evaluation is to evaluate programs, improve program effectiveness, and influence programming decisions. Test, measurement, and evaluation are concepts used in education to explain how the progress of learning and the final learning outcomes of students are assessed. Evidence of academic impact may be derived through various bibliometric methods, one example of which is the h-index, which incorporates factors such as the number of publications and citations (a worked example is sketched below). Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact? A discussion of the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006). There has been a drive from the UK government, through the Higher Education Funding Council for England (HEFCE) and the Research Councils (HM Treasury 2004), to account for the spending of public money by demonstrating the value of research to tax payers, voters, and the public in terms of socio-economic benefits (European Science Foundation 2009), in effect justifying this expenditure (Davies, Nutley, and Walter 2005; Hanney and González-Block 2011).
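As an illustration of the bibliometric measures mentioned above, the h-index can be computed directly from a list of per-paper citation counts: a researcher has index h if h of their papers have at least h citations each. The function below is a minimal sketch, and the citation counts are invented for demonstration.

```python
# Minimal sketch: compute the h-index from a list of per-paper citation counts.
# The citation counts used below are invented for demonstration only.
def h_index(citations):
    """Return the largest h such that h papers have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers have at least 4 citations
print(h_index([25, 8, 5, 3, 3]))   # 3
print(h_index([0, 0]))             # 0
```

As the surrounding text notes, such metrics capture academic visibility rather than broader socio-economic impact, and they say nothing about causality.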
Although metrics can provide evidence of quantitative changes or impacts arising from our research, they cannot adequately capture the qualitative impacts that take place and hence are not suitable for all of the impact we will encounter. Measurement, assessment, and evaluation also enable educators to measure the skills, knowledge, beliefs, and attitudes of learners. These traditional bibliometric techniques can be regarded as giving only a partial picture of full impact (Bornmann and Marx 2013), with no link to causality. Evaluation is a procedure that reviews a program critically. Tuckman defines evaluation as "a process wherein the parts, processes, or outcomes of a programme are examined to see whether they are satisfactory, particularly with reference to the stated objectives of the programme, our own expectations, or our own standards of excellence". Reviewing the research literature means finding, reading, and summarizing the published research relevant to your question. In the UK, the Russell Group universities responded to the REF consultation by recommending that no time lag be put on the delivery of impact from a piece of research, citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact (Russell Group 2009). The term comes from the French word 'évaluer', meaning "to find the value of". Accountability is a further driver. The Payback Framework was developed during the mid-1990s by Buxton and Hanney, working at Brunel University. Concerns over how to attribute impacts have been raised many times (The Allen Consulting Group 2005; Duryea et al.).

In this article, we draw on a broad range of examples, with a focus on methods of evaluation for research impact within Higher Education Institutions (HEIs). Evaluative research is a type of research used to evaluate a product or concept and to collect data to help improve a solution. In the UK, there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example, MICE (Measuring Impacts Under CERIF), UK Research Information Shared Service, and Integrated Research Input and Output System, all based on the CERIF standard. This is particularly recognized in the development of new government policy, where findings can influence policy debate and policy change without recognition of the contributing research (Davies et al.). The term "assessment" may be defined in multiple ways by different individuals or institutions, perhaps with different goals. Professor James Ladyman, at the University of Bristol, a vocal opponent of awarding funding based on the assessment of research impact, has been quoted as saying that inclusion of impact in the REF will create selection pressure, promoting academic research that has more direct economic impact or that is easier to explain to the public (Corbyn 2009). For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. While defining the terminology used to understand impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising from non-standard routes can be placed.
In this sense, when reading an opinion piece, you must decide whether you agree or disagree with the writer by making an informed judgment. Time and attribution are recurring challenges in assessing impact. Standard approaches actively used in programme evaluation, such as surveys, case studies, bibliometrics, econometrics and statistical analyses, content analysis, and expert judgment, are each considered by some (Vonortas and Link 2012) to have shortcomings when used to measure impacts. Metrics have commonly been used as a measure of impact, for example, in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. This presents particular difficulties in research disciplines conducting basic research, such as pure mathematics, where the impact of research is unlikely to be foreseen. Assessment refers to a related series of measures used to determine a complex attribute of an individual or group of individuals. What are the methodologies and frameworks that have been employed globally to assess research impact, and how do these compare? The Goldsmith report (Cooke and Nadim 2011) recommended making indicators value-free, enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. Organizations may be interested in reviewing and assessing research impact for one or more of the aforementioned purposes, and this will influence the way in which evaluation is approached. These metrics may be used in the UK to understand the benefits of research within academia and are often incorporated into the broader perspective of impact seen internationally, for example, within Excellence in Research for Australia and Star Metrics in the USA, in which quantitative measures are used to assess impact, for example, publications, citations, and research income. For more extensive reviews of the Payback Framework, see Davies et al.

Evaluation is a process that involves the careful gathering and evaluating of data on the actions, features, and consequences of a program. Assessment refers to the process of collecting information that reflects the performance of a student, school, classroom, or academic system based on a set of standards, learning criteria, or curricula. The traditional form of evaluation of university research in the UK was based on measuring academic impact and quality through a process of peer review (Grant 2006), though there is growing interest in evaluation practice and systems that go beyond these criteria and their definitions. In the UK, evaluation of academic and broader socio-economic impact takes place separately. What are the challenges associated with understanding and evaluating research impact? The Payback Framework systematically links research with the associated benefits (Scoble et al.). Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. By asking academics to consider the impact of the research they undertake, and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge.
A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. This petition was signed by 17,570 academics (52,409 academics were returned to the 2008 Research Assessment Exercise), including Nobel laureates and Fellows of the Royal Society (University and College Union 2011). Why should this be the case? Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning. This work was supported by Jisc [DIINN10]. The REF will therefore assess three aspects of research: outputs, impact, and environment. Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment, and second, using impact case studies that describe the impact taking place following excellent research within a unit of assessment (REF2014 2011a). Evaluation of impact in terms of reach and significance allows all disciplines of research and types of impact to be assessed side by side (Scoble et al.). The book also explores how different aspects of citizenship, such as attitudes towards diverse population groups and concerns for social issues, relate to classical definitions of norm-based citizenship from the political sciences. The introduction of impact assessments with the requirement to collate evidence retrospectively poses difficulties because evidence, measurements, and baselines have, in many cases, not been collected and may no longer be available. The verb evaluate means to form an idea of something or to give a judgment about something. The University and College Union (2011) organized a petition calling on the UK funding councils to withdraw the inclusion of impact assessment from the REF proposals once plans for the new assessment of university research were released. This is recognized as being particularly problematic within the social sciences, where informing policy is a likely impact of research. In the Brunel model, depth refers to the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users. In the UK, the Department for Business, Innovation and Skills provided funding of £150 million for knowledge exchange in 2011-12 to help universities and colleges support economic recovery and growth, and contribute to wider society (Department for Business, Innovation and Skills 2012). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al.). To adequately capture the interactions taking place between researchers, institutions, and stakeholders, the introduction of tools to enable this would be very valuable; a sketch of how such interactions might be recorded is given below. While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further in enabling these exchange events to be captured and analysed. Gathering evidence of the links between research and impact is not only a challenge where that evidence is lacking.
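As a sketch of what capturing such interactions might look like, the snippet below records knowledge-exchange events against a project and tallies them by stakeholder. The event types and field names are illustrative assumptions and are not taken from SIAMPI or any named system.

```python
# Minimal sketch: recording knowledge-exchange events and tallying them by
# stakeholder. Event types and field names are illustrative assumptions only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ExchangeEvent:
    project: str        # internal project identifier
    stakeholder: str    # who the interaction was with
    kind: str           # e.g. "meeting", "secondment", "advisory role"
    date: str           # ISO date, e.g. taken from a calendar entry

events = [
    ExchangeEvent("PROJ-001", "Local authority", "meeting", "2012-03-14"),
    ExchangeEvent("PROJ-001", "Local authority", "advisory role", "2012-09-02"),
    ExchangeEvent("PROJ-001", "Industry partner", "secondment", "2013-01-20"),
]

# Count recorded interactions per stakeholder for one project.
tally = Counter(e.stakeholder for e in events if e.project == "PROJ-001")
for stakeholder, count in tally.items():
    print(f"{stakeholder}: {count} recorded interaction(s)")
```

Feeding such records automatically from a calendar or diary, as suggested earlier, would let interactions accumulate with little extra administrative burden on researchers.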
The origin is from the Latin term 'valere', meaning "be strong, be well; be of value, or be worth".
