Core Area Assessment Report Set 2 - A1: Public Speaking, B1: Math or Quantitative Science, D3: Ethics, and Set 3 - A2: Rhetoric and Composition

History of Core Assessment Effort

The Core Assessment Working Group (CAWG) was formed in 2015 by the Core Advisory Committee (CAC), a committee made up of Department Chairs who represent each Core Area, faculty representatives from the School of Management and the School of Nursing and Health Professions, and the Associate Dean of Academic Effectiveness. CAWG was created with the support of the College Council of the College of Arts and Sciences (CAS) in response to CAS Dean Emeritus Marcelo Camperi’s call for an assessment of the Core Curriculum. That call was issued in response to faculty concerns surrounding the growth and coherence of the Core, and because the WASC Senior College and University Commission (WSCUC) requires that colleges and universities engage in regular curricular assessment in order to retain their academic accreditation. This is the first time that the USF Core Curriculum has been assessed since its inception in 2002.

In Spring 2015, the CAC, with the guidance of Associate Dean for Academic Effectiveness June Madsen Clausen, asked that a working group be created to design and implement a process for assessing the Core curriculum. CAWG was then constituted by Dean Camperi, with representation from different areas of the Core curriculum. The initial membership of CAWG included Tracy Benning (Sciences), Christine Young (Arts), Yaniv Stopnitzky (Social Sciences), and Ronald Sundstrom (Humanities, CAWG Chair). In Spring 2017, Joshua Gamson replaced Yaniv Stopnitzky; in Fall 2017, Ryan Van Meter replaced Ronald Sundstrom. In Spring 2018, Eve-Anne Doohan replaced Joshua Gamson. Thus the current membership of CAWG is Tracy Benning (Sciences, CAWG Chair), Christine Young (Arts), Ryan Van Meter (Humanities), and Eve-Anne Doohan (Social Sciences).

CAWG, with Associate Dean Clausen, created a timeline for assessing the Core curriculum (see Appendix A for the Core Area Assessment Master Timeline; see Appendix B for the A1, A2, B1, and D3 Assessment Process Timelines), and concurrently began to investigate and design materials to support an assessment of the Core curriculum. The group conferred with a consultant, Carol Gittens (Associate Dean, Santa Clara University). Based on Gittens’s recommendation, CAWG consolidated the 48 learning outcomes from the 11 Core Areas (A1 through F) into a simplified and more measurable set of Higher Order Learning Goals (HOLGs) corresponding to each Core Area (see Appendix C). The HOLGs were then used to design a draft rubric for each Core Area, with the goal of developing rubrics specific enough to offer a meaningful measure of student learning in relation to Core learning outcomes and general enough that they could be applied to student work products from a variety of courses and disciplines within a Core Area.

The Core Areas were divided into five sets of 2-3 Core Areas, with each set due to be assessed once during a five-year period. This assessment process was broken into five phases, with staggered start dates for the different Core Area sets. The process includes the following steps: 1) faculty in a Core area are asked to align their Core courses with the respective Core learning outcomes; 2) rubrics for each area are developed with input from faculty teaching in the relevant Core area, and assessable student work products are identified; 3) workshops are conducted both to inform faculty about the assessment process and to recruit faculty raters; 4) student work products are gathered and rated by paid faculty raters; 5) the results are interpreted by CAWG and shared with faculty and administration (see Appendix D for Core Assessment Reporting Protocol).

Process and Methodology for Core Area Assessments - Sets 2 & 3

Core Areas A1 (Public Speaking), B1 (Math or Quantitative Science), and D3 (Ethics) were originally included in Set 2. A2 (Rhetoric and Composition) is the sole Core Area assigned for assessment in Set 3, as the assessment of Core Area E was postponed until Set 4. It was determined that combining the assessment data for Sets 2 and 3 into one report would be useful: A1 and A2 are both housed in the Rhetoric and Language Department, many of the courses are offered by the same group of faculty, and some courses combine A1 and A2 learning outcomes, so the assessment findings for these two Core Areas are related.

To begin the assessment of these Core Areas, syllabi from all courses in each of these Core Areas (taught in Fall 2016 for A1, B1, and D3 and Spring 2017 for A2) were reviewed to check for alignment with the Core Learning Outcomes (CLOs). Alignment was defined as whether the CLOs were included on each syllabus and whether methods of assessment for each of the CLOs were included on each syllabus. The alignment check was organized by the Core Advisory Committee and headed by the Chair of each Core Area. For some Core Areas, the Core Area Chair reviewed all syllabi to check for alignment. For others, the Core Area Chair enlisted the help of the other members of the Core Area Committee (made up of Chairs and Directors of all Departments and Programs that teach in that particular Core Area). Faculty were notified via their Department Chair/Program Director if their syllabus was not in alignment, with the expectation that CLOs and methods of assessment would be included in future syllabi.

The semester following the syllabi alignment check, all full-time and part-time faculty teaching A1, A2, B1, and D3 classes were invited to attend rubric feedback sessions in their Core Area, to ensure that rubrics remained true to the intentions of the existing CLOs, would make sense to faculty raters, would reflect the language and practices of the Core Area, and when applied to student work products would provide an accurate measure of whether and to what degree the learning outcomes were achieved. The rubrics were each reviewed by faculty teaching in the Core Area during two or more rubric feedback sessions, before their final approval by the CAC in April 2017 for A1 and D3, May 2017 for B1, and November 2017 for A2 (see Appendix E for the Rating Rubrics).
 
Additionally, at the rubric feedback sessions, faculty in each Core Area helped identify what type of student work products would be available and useful for assessment: for A1, a rhetorical analysis of a speech and a video of a delivered speech were collected for each student; for A2, work products included some form of researched writing and some form of rhetorical analysis, and the number of work products submitted for each student varied depending on how course assignments met these aims, but in most cases it was one or two; for B1, faculty submitted exams (or excerpts of exams); for D3, an argumentative paper was collected. Student work products were then randomly sampled using a stratified approach based on overall course enrollments (see Appendix F for details on the numbers of courses, student work products, and sampled student work products).
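The stratified sampling step described above can be sketched as follows. This is a minimal illustration, not CAWG's actual sampling procedure: the section names, enrollment counts, and target sample size are hypothetical, and each course contributes work products in proportion to its enrollment.

```python
import math
import random

def stratified_sample(products_by_course, total_sample_size, seed=0):
    """Randomly sample work products, stratified by course.

    Each course contributes a share of the sample proportional to its
    enrollment (here, the number of submitted work products).
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    total = sum(len(p) for p in products_by_course.values())
    sampled = []
    for course, products in products_by_course.items():
        # Proportional allocation, rounded up so small courses still contribute.
        n = min(len(products),
                math.ceil(total_sample_size * len(products) / total))
        sampled.extend(rng.sample(products, n))
    return sampled

# Hypothetical example: three course sections of different sizes.
courses = {
    "SECTION-A": [f"a-{i}" for i in range(30)],
    "SECTION-B": [f"b-{i}" for i in range(20)],
    "SECTION-C": [f"c-{i}" for i in range(10)],
}
sample = stratified_sample(courses, total_sample_size=12)
# SECTION-A contributes 6 products, SECTION-B 4, and SECTION-C 2.
```

Proportional allocation keeps larger sections from dominating the rated pool while guaranteeing that every section is represented.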

Respective faculty in each Core Area were invited to apply to serve as faculty raters during a daylong assessment of student work products, for which they received a $250 honorarium. Eight A1 faculty participated in the rating session held on January 16, 2018. Four B1 faculty participated in the rating session held on January 17, 2018. Four D3 faculty participated in the rating session held on January 18, 2018. Eight A2 faculty participated in the rating session held on June 4, 2018. (See Appendix B for a list of participants.) Rating was preceded by a calibration process, in which participants rated the same student work products and discussed any discrepancies in their application of the rubric.

A portion of the work products were also rated by a second faculty rater to check inter-rater reliability (this procedure is explained later in this report). In total, raters assessed about 23% of the submitted A1 work products, about 10% of the submitted A2 work products, about 10% of the submitted B1 work products, and about 12% of submitted D3 work products.

Assessment Results - Sets 2 & 3

The established benchmark for Core assessment is that 70% or more of students should achieve Level 3 (Meets Expectations) in each criterion in every Core Area. For every rubric, each criterion, while specific to that Core Area, uses the same 4-point rating scale: 1 = Below Expectations, 2 = Needs Improvement, 3 = Meets Expectations, 4 = Exceeds Expectations.
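The benchmark percentage is simple arithmetic over the rating distribution: the share of rated work products scoring 3 or higher. As an illustration, the A1 Criterion 1 counts reported in Figure 1 (one score of 1, fourteen 2s, seventy-nine 3s, and twenty-two 4s) reproduce the 87.1% shown in Table 1. The function below is a sketch of that calculation, not part of the assessment tooling.

```python
def percent_meeting_benchmark(scores):
    """Percentage of ratings at Level 3 (Meets Expectations) or higher."""
    return 100.0 * sum(1 for s in scores if s >= 3) / len(scores)

# A1 Criterion 1 rating counts from Figure 1: 1 x 1, 14 x 2, 79 x 3, 22 x 4.
scores = [1] * 1 + [2] * 14 + [3] * 79 + [4] * 22
round(percent_meeting_benchmark(scores), 1)  # 101 of 116 products -> 87.1
```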

A1 Public Speaking

Results shown in Figures 1-4 reveal that a solid majority of students leave A1 courses able to evaluate the effectiveness of communication using rhetorical principles, compose oral communication, present oral communication, and apply principles of ethical and socially responsible communication to public address. Student achievement in several A1 criteria is the highest of any Core Area assessed to date.

Student performance is strong on all criteria but is strongest on Criterion 1 and lowest on Criterion 3. Specifically:

●    Criterion 1: Evaluates effectiveness of communication using rhetorical concepts and principles. Eighty-seven percent of students were rated as meeting or exceeding expectations in this area, and less than 1% failed to meet expectations altogether (Figure 1, Table 1).

Criterion 1. Evaluates effectiveness of communication using rhetorical concepts and principles (bar graph). Rater score of 1 given to 1 student work product; rater score of 2 given to 14 student work products; rater score of 3 given to 79 student work products; rater score of 4 given to 22 student work products.

Figure 1. Rating score distribution for sampled A1 work products. A score of 3 or higher indicates that a student has met Criterion 1 competency expectations.

●    Criterion 2: Composes oral communication. Over 80% of students were rated as meeting or exceeding expectations in this area. No students were rated as failing to meet the expectations altogether (Figure 2, Table 1).

Criterion 2. Composes oral communication (bar graph). Rater score of 2 given to 23 student work products; rater score of 3 given to 72 student work products; rater score of 4 given to 21 student work products.

Figure 2. Rating score distribution for sampled A1 work products. A score of 3 or higher indicates that a student has met Criterion 2 competency expectations.

●    Criterion 3: Presents oral communication. Almost 70% of students were rated as meeting or exceeding expectations in this area. Twenty-eight percent were rated as needing improvement, with only 1.7% failing to meet expectations (Figure 3, Table 1).

Criterion 3. Presents oral communication (bar graph). Rater score of 1 given to 2 student work products; rater score of 2 given to 33 student work products; rater score of 3 given to 70 student work products; rater score of 4 given to 11 student work products.

Figure 3. Rating score distribution for sampled A1 work products. A score of 3 or higher indicates that a student has met Criterion 3 competency expectations.

●    Criterion 4: Applies principles of ethical and socially responsible communication to public address. Over 70% of students were rated as meeting or exceeding expectations in this area. Seventeen percent were rated as needing improvement. About 7% of students were rated as below expectations (Figure 4, Table 1).

Criterion 4. Applies principles of ethical and socially responsible communication to public address (bar graph). Rater score of 1 given to 8 student work products; rater score of 2 given to 20 student work products; rater score of 3 given to 64 student work products; rater score of 4 given to 24 student work products.

Figure 4. Rating score distribution for sampled A1 work products. A score of 3 or higher indicates that a student has met Criterion 4 competency expectations.

Criteria Percentage of Students Scoring 3 or Above for A1 Criteria
1 87.1
2 80.2
3 69.8
4 75.9

Table 1. Percentage of students meeting expectations on A1 assessed work. The percentage is based on the number of work products with a rating score of 3 or higher divided by the total number of rated products overall.

A2 Rhetoric and Composition

Results from the A2 assessment reveal that a large majority of students leave the A2 Core achieving the desired level of Core competencies. In fact, A2 is the first Core Area to date to achieve 70% or above competency for all criteria assessed. Figures 5-7 indicate that, for each criterion assessed, the number of students scoring a 3 or 4 exceeds, by a large margin, the number who fail to demonstrate the criterion, and, as Table 2 indicates, the percentage of students scoring 3 or higher is above 70% for all three criteria. Specifically:

●    Criterion 1. Critically analyzes linguistic and/or rhetorical strategies. Approximately 77% of assessed work products demonstrated student ability to analyze linguistic and/or rhetorical strategies. Only 7% of student work assessed failed to demonstrate this criterion (Figure 5, Table 2).

Criterion 1. Critically analyzes linguistic and/or rhetorical strategies (bar graph). Rater score of 1 given to 10 student work products; rater score of 2 given to 22 student work products; rater score of 3 given to 76 student work products; rater score of 4 given to 30 student work products.

Figure 5. Rating score distribution for sampled A2 work products. A score of 3 or higher indicates that a student has met Criterion 1 competency expectations.

●    Criterion 2. Composes research-based arguments integrating sources appropriate to the task. Over 71% of the students scored 3 or above on Criterion 2, the lowest percentage of the three criteria, but again, only 7% failed to demonstrate any level of competency with this criterion (Figure 6, Table 2).

Criterion 2. Composes research-based arguments integrating sources appropriate to the task (bar graph). Rater score of 1 given to 10 student work products; rater score of 2 given to 29 student work products; rater score of 3 given to 69 student work products; rater score of 4 given to 30 student work products.

Figure 6. Rating score distribution for sampled A2 work products. A score of 3 or higher indicates that a student has met Criterion 2 competency expectations.

●    Criterion 3. Produces professional and/or academic writing. Finally, over 76% of student work products sampled for this criterion were scored 3 or above, and an even smaller percentage of work products than for the first two criteria, just over 2%, were scored as failing to demonstrate any competency for Criterion 3 (Figure 7, Table 2).

Criterion 3. Produces professional and/or academic writing (bar graph). Rater score of 1 given to 3 student work products; rater score of 2 given to 30 student work products; rater score of 3 given to 82 student work products; rater score of 4 given to 23 student work products.

Figure 7. Rating score distribution for sampled A2 work products. A score of 3 or higher indicates that a student has met Criterion 3 competency expectations.

Criteria Percentage of Students Scoring 3 or Above for A2 Criteria
1 76.8
2 71.7
3 76.1

Table 2. Percentage of students meeting expectations on A2 assessed work. The percentage is based on the number of work products with a rating score of 3 or higher divided by the total number of rated products overall.

B1 Math or Quantitative Science

Results for Core Area B1 reveal that student performance is strongest in Criterion 1, with a high percentage of students able to design a mathematical solution (Figure 8, Table 3). In addition, about two-thirds of students are able to implement, identify, and correct problems with the design, as well as critically evaluate the solution and explain its relevance to the original problem (Figures 9 & 10, Table 3). Specifically:

●    Criterion 1: Design a mathematical solution. Almost 86% of Core B1 students are able to design a mathematical solution upon completion of a B1 Core class (Table 3). In addition, only 5% of students were unable to demonstrate this criterion.

Criterion 1. Design a mathematical solution (bar graph). Rater score of 1 given to 6 student work products; rater score of 2 given to 12 student work products; rater score of 3 given to 59 student work products; rater score of 4 given to 48 student work products.

Figure 8. Rating score distribution for sampled B1 work products. A score of 3 or higher indicates that a student has met Criterion 1 competency expectations.

●    Criterion 2: Implement the design or identify and correct problems with the design. Approximately 64% of the assessed student work products were scored 3 or above for this criterion, just below the desired benchmark of 70% student competency. Only 7.2% of student work products failed to demonstrate this criterion.

Criterion 2. Implement the design or identify and correct problems with the design (bar graph). Rater score of 1 given to 9 student work products; rater score of 2 given to 35 student work products; rater score of 3 given to 51 student work products; rater score of 4 given to 30 student work products.

Figure 9. Rating score distribution for sampled B1 work products. A score of 3 or higher indicates that a student has met Criterion 2 competency expectations.

●    Criterion 3: Critically evaluate a solution and its relevance to the original problem. Again, approximately 65% of student work products assessed scored a 3 or above, just below the benchmark of 70% (Table 3). However, 16% (a larger portion than in Criterion 2) failed to demonstrate the ability to critically evaluate a solution and its relevance to the problem (Figure 10). This may be a result of the work products submitted or the nature of the assignment, but with such a large percentage of the work products in the “Below Expectations” category, this result warrants further investigation by B1 Core faculty.

Criterion 3. Critically evaluate a solution and its relevance to the original problem (bar graph). Rater score of 1 given to 20 student work products; rater score of 2 given to 24 student work products; rater score of 3 given to 46 student work products; rater score of 4 given to 35 student work products.

Figure 10. Rating score distribution for sampled B1 work products. A score of 3 or higher indicates that a student has met Criterion 3 competency expectations.

Criteria Percentage of Students Scoring 3 or Above for B1 Criteria
1 85.6
2 64.8
3 64.8

Table 3. Percentage of students meeting expectations on B1 assessed work. The percentage is based on the number of work products with a rating score of 3 or higher divided by the total number of rated products overall.

D3 Ethics

Core Area D3 has the largest number of criteria for any Core Area assessed thus far and the results are variable. None of the criteria reached the benchmark of 70% for the number of students scoring 3 or above (Figures 11-15, Table 4). Specifically:

●    Criterion 1. Identifies key ethical theories, concepts or issues. For this criterion, slightly more student work products scored below the competency benchmark of 3 than at or above it (Figure 11). Only 45% of assessed work products scored 3 or above, with 15.5% of student work products failing to demonstrate this criterion (Figure 11, Table 4).

Criterion 1. Identifies key ethical theories, concepts, or issues (bar graph). Rater score of 1 given to 13 student work products; rater score of 2 given to 33 student work products; rater score of 3 given to 33 student work products; rater score of 4 given to 5 student work products.

Figure 11. Rating score distribution for sampled D3 work products. A score of 3 or higher indicates that a student has met Criterion 1 competency expectations.

●    Criterion 2. Explains significance of concepts, theories or issues, and their interrelations. Results for this criterion indicate that 28.6% of assessed student work scored a 3 or above during the rating process. In addition, almost 30% of assessed student work failed to demonstrate this criterion at all. There could be several explanations for the poor results obtained for this criterion (see the D3 reflection for details). However, we do not believe that inter-rater reliability was a factor, as it was 85%, which is high for a rubric with this many criteria (see the inter-rater reliability section below, Table 8). In short, there was good agreement among faculty raters about the scores assigned to the D3 products, so other factors are considered in the D3 reflections section below.

Criterion 2. Explains significance of theories, concepts, or issues and their inter-relations (bar graph). Rater score of 1 given to 25 student work products; rater score of 2 given to 35 student work products; rater score of 3 given to 21 student work products; rater score of 4 given to 3 student work products.

Figure 12. Rating score distribution for sampled D3 work products. A score of 3 or higher indicates that a student has met Criterion 2 competency expectations.

●    Criterion 3. Critically assesses theories, concepts, issues or positions within their appropriate disciplinary context. Only 15.5% of student work products scored a 3 or higher for this criterion; the remaining 84.5% received a score of 1 or 2. There could be several explanations for the poor results obtained for this criterion (see the D3 reflection for details), but inter-rater reliability was high at 85% (see the inter-rater reliability section below, Table 8). There was general agreement among faculty raters about the scores assigned to the D3 products.

Criterion 3. Critically assesses theories, concepts, issues, or positions within their appropriate disciplinary context (bar graph). Rater score of 1 given to 39 student work products; rater score of 2 given to 32 student work products; rater score of 3 given to 10 student work products; rater score of 4 given to 3 student work products.

Figure 13. Rating score distribution for sampled D3 work products. A score of 3 or higher indicates that a student has met Criterion 3 competency expectations.

●    Criterion 4. Articulates content analysis, interpretation, and evaluation using ethical theories. Almost 60% of student work products assessed for this criterion achieved a score of 3 or better. Only 3.6% of student work products failed to demonstrate this criterion. This criterion demonstrates by far the highest level of student competency in the D3 Core Area (Figure 14, Table 4).

Criterion 4. Articulates content analysis, interpretation, and evaluation using ethical theories (bar graph). Rater score of 1 given to 3 student work products; rater score of 2 given to 31 student work products; rater score of 3 given to 47 student work products; rater score of 4 given to 3 student work products.

Figure 14. Rating score distribution for sampled D3 work products. A score of 3 or higher indicates that a student has met Criterion 4 competency expectations.

●    Criterion 5. Applies content to self or the world, considering multiple perspectives (e.g. comparative, historical, methodological) and why they matter. Results for the final D3 criterion indicate that 44% of student work products achieved a score of 3 or better, while only approximately 5% failed to demonstrate this criterion. Approximately 51% of student work products received a score of 2, indicating “Needs Improvement”; this represents the largest group of scored work products for this criterion. Inter-rater reliability was high for this criterion at 90% (see the inter-rater reliability section below, Table 8), so raters were generally in agreement. It is somewhat encouraging that this scoring group is large in comparison to the “Below Expectations” group.

Criterion 5. Applies content to self or the world, considering multiple perspectives (e.g. comparative, historical, methodological) and why they matter (bar graph). Rater score of 1 given to 4 student work products; rater score of 2 given to 43 student work products; rater score of 3 given to 34 student work products; rater score of 4 given to 3 student work products.

Figure 15. Rating score distribution for sampled D3 work products. A score of 3 or higher indicates that a student has met Criterion 5 competency expectations.

Criteria Percentage of Students Scoring 3 or Above for D3 Criteria
1 45.2
2 28.6
3 15.5
4 59.5
5 44.0

Table 4. Percentage of students meeting expectations on D3 assessed work. The percentage is based on the number of work products with a rating score of 3 or higher divided by the total number of rated products overall.

Reflections on Assessment Results

A1 Public Speaking

According to the sample rated, the Core A1 curriculum and instruction is very successful in teaching students to evaluate the effectiveness of communication using rhetorical concepts and principles, compose oral communication, present oral communication, and apply principles of ethical and socially responsible communication to public address. Overall, very few students were rated as failing to meet expectations on any of the four criteria evaluated.

Of the criteria, students were rated highest on Criterion 1, evaluating the effectiveness of communication using rhetorical concepts and principles, which indicates a high skill level in the ability to critique communication. Students were rated lowest on Criterion 3, presenting oral communication. A few of the faculty raters commented on the quality of the speeches, indicating that they were surprised by how much students read their speeches or relied on note cards. This may indicate that more attention could be given to practicing extemporaneous speaking.

Faculty raters also indicated that Criterion 4 (applies principles of ethical and socially responsible communication to public address) was more difficult to rate than the other criteria. Students were still rated as achieving a fairly high level of success on this criterion, but the experience of the faculty raters may indicate a need to review this criterion.


A2 Rhetoric and Composition

Given the sample rated, the Core A2 curriculum and instruction is quite effective in teaching students how to analyze linguistic and/or rhetorical strategies, compose research-based arguments that integrate appropriate sources, and produce professional and/or academic writing. For all three criteria, more than 70% of all work products rated scored “Meets Expectations” or higher. For Criteria 1 and 3, that number was above 75%. Very few student work products were rated “Below Expectations” for any criterion. In addition to the obviously effective curriculum and instruction, CAWG attributes a significant part of this Core Area’s success in achieving competency to the robust participation of A2 faculty at all phases of the assessment process—from syllabi alignment to rubric feedback to careful work product selection/submission to a large number of A2 faculty raters.

Following the A2 rating workshop, one of the faculty raters remarked that she was surprised there were so many student work products she could rate “meets expectations” but not many to rate as “exceeds expectations.” Looking at the data, for criteria 1 and 2, the numbers of students whose products were rated “Exceeds Expectations” were greater than both the numbers for “Needs Improvement” and “Below Expectations.” It seems then that A2 faculty are well-positioned to fine-tune their courses to give students more opportunities to “exceed expectations” in the future.

Relative to the other criteria, Criterion 2 scored lower. While it still had impressive numbers of student work products at “Meets Expectations” or higher, the skill tested for in Criterion 2 gives A2 faculty the greatest room for improvement. In the written feedback following the rating workshop, one rater noted that A2 faculty seemed to define “argument” and “research” quite broadly in the submitted work products; this rater appreciated that the rubric could accommodate a diversity of products. It’s possible the relatively lower scores for this skill could be attributed to a wider scope for the criterion on the rubric.

During rubric feedback sessions with A2 faculty, it was also noted that the CLOs for the area do not include anything related to applying knowledge in the content area to self, community or world. Some in the A2 faculty identified this absence as a possible revision priority for the future.


B1 Math or Quantitative Science

The Core B1 curriculum and instruction as assessed is highly successful in teaching students to design a mathematical solution, with over 85% of students achieving competency for this criterion. However, for the other two criteria, competency levels were lower, at about 65%. While this is close to the desired level of student competency at 70%, the assessment of these criteria is in many ways more difficult because of the variety of student work products, some of which were not appropriate for these specific criteria. Faculty raters often had to be creative and lenient when scoring products, especially with respect to Criterion 2, but often Criterion 3 as well. For example, it was difficult to assess Criterion 2 (implement the design or identify and correct problems with the design) when the work product was an exam that only required solutions to mathematical problems without detail or explanation.

In general, the B1 criteria are very straightforward and consistent with the original Core Learning Outcomes; however, it is unclear how many B1 Core instructors examined the rubric ahead of time in order to submit an appropriate student work product. Faculty participation throughout rubric development and the subsequent assessment was low. Greater engagement of B1 faculty in the assessment process is necessary both to improve the rubric for the next round of assessment and to help identify student work products that would be appropriate for assessment from the various disciplines in the B1 Core.

Consequently, while results ranged from a very high level of demonstrated competence to slightly below the benchmark, it is our belief that greater faculty engagement in the process would result in an even higher level of demonstrated student competence for all three criteria.


D3 Ethics

The Core D3 curriculum and instruction is most effective in teaching students how to articulate their analyses using ethical theories, with almost 60% of work products scored “Meets Expectations” or higher. Results in the other four criteria were more dispersed. Slightly less than half of student work products received competency ratings for Criterion 1 (identifies key ethical theories, concepts, or issues) and Criterion 5 (applies content to self or the world, considering multiple perspectives and why they matter). Less than a third of student work products met or exceeded expectations for Criterion 2 (explains significance of theories, concepts, or issues, and their inter-relations), and work products were weakest for Criterion 3 (critically assesses theories, concepts, issues or positions within their appropriate disciplinary context), with only 15.5% scored as competent.
 
There are multiple potential explanations for the diversity of scores across the five D3 criteria. First, the D3 rubric development process had more limited faculty participation than in other Core Areas. Evaluating student work using the five criteria in the D3 rubric appeared to be more difficult than evaluating student work using rubrics with fewer, more condensed criteria. Several faculty raters commented that Criterion 4, in particular, was too complex to track effectively. Thus, the D3 rubric might benefit from further development. Second, the diverse range of scores might indicate that some criteria (and the D3 learning outcomes they are based on) reflect expectations that are more appropriate for major courses than for Core courses. It is possible that only a minority of student work products were rated as competent in some areas because some criteria are too difficult to achieve in a Core course.

It should be noted that for Criterion 3, almost 40% of work products received the lowest score, suggesting that the selected student work products came from assignments that did not actually address certain elements of the D3 learning outcomes. This might reflect that D3 faculty are not focusing on certain components of the D3 learning outcomes when creating syllabi and designing assignments, or it might instead indicate that most D3 courses do not have a single assignment that addresses all D3 learning outcomes. Providing D3 faculty with additional support in aligning course assignments with D3 learning outcomes might yield better results in future rating sessions.

Inter-Rater Reliability Analysis for Set 2 Raters

Inter-rater reliability is a numerical estimate that measures the degree of agreement among raters when assessing the same work product. Inter-rater reliability was examined to ensure that the assessment process was both accurate and consistent. We used a basic two-rater model to calculate percent agreement, with both exact and adjacent ratings scored as agreement. Out of the 116 work products rated for A1, a sample of 23 work products (or 19.8%) was rated twice; out of 138 total work products rated for A2, a sample of 31 work products (or 22.5%) was rated twice; out of 125 total work products rated for B1, a sample of 24 work products (or 19.2%) was rated twice; and out of 84 total work products rated for D3, a sample of 20 work products (or 23.8%) was rated twice.
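The two-rater percent-agreement calculation described above can be sketched as follows. This is a minimal illustration only; the rater scores shown are hypothetical and do not reproduce the actual assessment data.

```python
def percent_agreement(ratings_a, ratings_b, adjacent_counts=True):
    """Two-rater percent agreement on a rubric scale.

    Exact matches always count as agreement; when adjacent_counts is
    True, ratings differing by one scale point also count, per the
    method described in the report.
    """
    assert len(ratings_a) == len(ratings_b), "each product needs two ratings"
    tolerance = 1 if adjacent_counts else 0
    agreements = sum(
        1 for a, b in zip(ratings_a, ratings_b) if abs(a - b) <= tolerance
    )
    return 100.0 * agreements / len(ratings_a)


# Hypothetical ratings on the 1-4 rubric scale for ten double-rated products.
rater_1 = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater_2 = [4, 3, 2, 2, 3, 1, 3, 4, 4, 3]
print(round(percent_agreement(rater_1, rater_2), 1))  # 90.0
```

In the hypothetical data above, nine of the ten rating pairs are identical or adjacent, so the percent agreement is 90.0%; only the pair differing by two scale points counts as disagreement.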

Rule-of-thumb benchmarks for rater data with four or fewer categories (as with the A1, A2, and B1 datasets) are that a percent agreement of 90% or higher constitutes high agreement, and 75% constitutes minimal agreement. When there are five categories, as with the D3 dataset, 75% is considered high agreement (as well as minimal agreement) as long as 90% of the scores are identical or adjacent. Using these benchmarks, rater agreement on all A1 criteria surpassed the minimal standard, with Criteria 1 and 3 having the highest percent agreement at 100%, closely followed by Criterion 2 at 95.7% (Table 5). Criterion 4 had the lowest percent agreement at 87.0%, still well above the minimal benchmark.
 

Criterion A1 Rater Percent Agreement
1 100.0
2 95.7
3 100.0
4 87.0

Table 5. Inter-rater reliability for A1 criteria based on a two rater method of agreement.

Percent rater agreement for A2 criteria ranged from 87.1% to 93.5%, resulting in high agreement for two of the three criteria (Table 6). Criterion 1 had the lowest percent agreement at 87.1% but was still well above the minimal agreement benchmark.

Criterion A2 Rater Percent Agreement
1 87.1
2 93.5
3 93.5

Table 6. Inter-rater reliability for A2 criteria based on a two rater method of agreement.

Inter-rater reliability for all B1 criteria was above the high-agreement benchmark of 90% (Table 7), with two criteria achieving 100% agreement. A straightforward, easy-to-apply rubric and a thorough calibration process most likely contributed to these results.

Criterion B1 Rater Percent Agreement
1 91.7
2 100.0
3 100.0

Table 7. Inter-rater reliability for B1 criteria based on a two rater method of agreement.

Inter-rater reliability was most variable for the D3 Core Area (Table 8). The D3 rubric contained five criteria, the most of any rubric in this set, so the high-agreement benchmark was lowered to account for the increased rubric complexity. Even so, results indicate high percent agreement for all criteria. Criteria 1 and 2 had the lowest percent agreement at 85%, while Criterion 4 had the highest at 100%.

For all of the assessed Core Areas, we concluded that the calibration process was sufficient to train the raters and that the results presented fall well within an acceptable reliability range.

Criterion D3 Rater Percent Agreement
1 85.0
2 85.0
3 90.0
4 100.0
5 90.0

Table 8. Inter-rater reliability for D3 criteria based on a two rater method of agreement.

Next Steps for Core Assessment Reports

Following the release of the Core assessment report and any department-specific data, departments and programs will be required to offer their interpretations of the results and to evaluate the Core Learning Outcomes (CLOs) as they apply to the courses they teach. Departments and programs will be required to provide feedback on the current set of CLOs for their Core Area and to comment both on strategies to address deficiencies identified in the assessment process and on whether CLOs should be modified as a result of the assessment. Potential outcomes at the department or program level include, but are not limited to: 1) determining that modifications to current CLOs are necessary, 2) identifying more appropriate student work products for the assessment process, 3) suggesting modifications to the rubric used for the assessment, and 4) identifying changes to specific Core courses to better align with CLOs. This information will be collected in a simple Google form designed to capture faculty sentiment at the department or program level after discussion of the assessment results. Once submitted, this form will be sent to the relevant Core Area Chair, the Core Advisory Committee (CAC) co-chairs, and the Associate Dean of Academic Effectiveness. A timeline for reporting feedback will be established to align with other required assessment activities and reports for the College. Information from these reports will inform Core Area Chairs and the CAC on the state of the Core curriculum and should provide the data necessary to guide any subsequent changes to the Core curriculum.

Appendix A: Core Area Assessment Master Timeline

Core Assessment Timeline

Appendix B: A1, A2, B1, and D3 Assessment Process Timelines

FT = Full-time, PT = Part-time, PHP = Preferred Hiring Pool, ADAE = Associate Dean of Academic Effectiveness

  • 3/23/17 CAWG Meeting – D3 Rubric Review - Participating faculty: Rebecca Gordon (PT PHP), Aysha Hidaytullah (FT), Tsering Wangchuk (FT)
  • 4/6/17 CAWG Meeting – A1 Rubric Review - Participating faculty: Ted Matula (FT), Michelle Lavigne (FT), Jacquelyn Horton (FT), Michael Rozendal (FT)
  • 4/6/17 CAWG Meeting – D3 Rubric Review - Participating faculty: Marvin Brown (PT), Greig Mulberry (PT)
  • 4/11/17 D3 Rubric approved at CAC meeting
  • 4/20/17 CAWG Meeting – A1 Rubric Review - Participating faculty: Deborah Callister (PT), Julie Sullivan (FT), Michelle Lavigne (FT), Stephanie Hunt (PT), Peter Novak (FT), Michael Rozendal (FT), Patrick McDonnell (FT)
  • 4/20/17 CAWG Meeting – B1 Rubric Review - Participating faculty: Cornelia Van Cott (FT), Amalia Kokkinaki (FT), Dave Wolber (FT)
  • 4/25/17 A1 Rubric approved at CAC meeting
  • 5/4/17 CAWG Meeting – B1 Rubric Review - Participating faculty: Michael Bloch (FT), Mouwafac Sidaoui (FT), Jack Lendvay (FT)
  • 5/9/17 B1 Rubric approved at CAC meeting
  • 10/26/17 A2 Rubric Review - Participating faculty: David Holler (FT), Cathy Gabor (FT), Patrick McDonnell (PT), Leslie Dennen (FT), Mark Merritt (FT), Phil Hanson (FT), Rick Roberts (PT), Sabrina Nelson (PT)
  • 10/27/17 CAWG Meeting – A2 Rubric Review
  • 10/3 & 10/17 Rating Workshop Orientation for A1, B1, and D3 -- led by CAWG
  • 11/17/17 Faculty Rater Applications Due 11/29/17
  • 11/29/17 A2 Rubric approved at CAC meeting
  • 1/16/18 A1 Rating Workshop - Led by CAWG. Participating faculty: Roberta D'Alois (PT), Leslie Dennen (FT), Jacquelyn Horton (FT), Tom Lugo (FT), Marc Martin (PT), Ted Matula (FT), Leigh Meredith (FT), Brian Vannice (PT)
  • 1/17/18 B1 Rating Workshop - Led by CAWG. Participating faculty: Alark Joshi (FT), Christian Kaegi (PT), Amalia Kokkinaki (FT), Stephen Yeung (FT)
  • 1/18/18 D3 Rating Workshop - Led by CAWG. Participating faculty: Rebecca Gordon (PT PHP), Greig Mulberry (PT PHP), Marjolein Oele (FT Tenured), Amanda Parris (PT PHP)
  • 2/14/18 CAWG attends Rhetoric and Language Department meeting to inform about Rating Workshop
  • 6/4/18 A2 Rating Workshop - Led by CAWG. Participating faculty: Regina Arnold (PT), Deborah Callister (PT), Leslie Dennen (FT), Chriss Warren Foster (PT), Cathy Gabor (FT), Nicole Gonzales Howell (FT), Tika Lamsal (FT), Julie Sullivan (FT)

Appendix C: Higher Order Learning Goals (HOLGs)

Created by CAWG to simplify the Core Area Learning Outcomes into more consistent and more measurable terms.

Note: the Higher Order Learning Goals (HOLGs) do not replace the Core Learning Outcomes (CLOs). CLOs will still be used to evaluate whether a course should receive Core designation. HOLGs will be used exclusively for assessing student learning in Core courses.

WASC ILO HOLG CLO
A1: PUBLIC SPEAKING
Written Communication
Oral Communication
Critical Thinking
ILO 1
ILO 3
Analyze, interpret, and evaluate, using rhetorical concepts and principles, the effectiveness of their own and others' communication in both academic and civic contexts, and identify ethical problems in public address. A1: 4
A1: 5
Written Communication
Oral Communication
ILO 4 Compose and present well-organized speeches, and well-reasoned, appropriately supported oral arguments. A1: 1
A1: 2
A1: 3
A2: RHETORIC AND COMPOSITION
Written Communication
Oral Communication
Critical Thinking
ILO 1
ILO 3
Analyze, interpret, and evaluate linguistic and rhetorical strategies used in a variety of texts, and connect multiple texts in an argumentative essay, making comparisons and contrasts between them. A2: 1
A2: 2
Information Literacy ILO 5
ILO 6
Compose sophisticated research questions and arguments in response to those questions, conducting library research, and using academic documentation methods. A2: 3
A2: 4
A2: 5
B1: MATH
Critical Thinking
Quantitative Reasoning
Information Literacy
ILO 3
ILO 6
Design and implement mathematical solutions to algebraic, algorithmic, statistical, numerical, or computational problems. B1: 1
B1: 2
Critical Thinking
Quantitative Reasoning
Information Literacy
ILO 6 Evaluate the validity of a solution and its relevance to the original problem using quantitative reasoning as the norm for decision making. B1: 3
B2: SCIENCE
Critical Thinking
Quantitative Reasoning
Information Literacy
ILO 3 Demonstrate literacy in the content and principles of a scientific discipline. B2: 1
Critical Thinking
Quantitative Reasoning
Information Literacy
ILO 1
ILO 3
ILO 6
Conduct laboratory or field procedures that explore content, principles and application of scientific disciplines in a socially responsible manner. B2: 2
B2: 3
B2: 4
C1: LITERATURE
Critical Thinking
Information Literacy
ILO 1
ILO 3
Analyze, interpret, and evaluate the historical, social, and cultural influences that inform diverse literary works. C1: 1
C1: 2
C1: 3
Written Communication
Oral Communication
Critical Thinking
ILO 3
ILO 4
Articulate the ideas, plural meanings, moral and social implications, and formal features of literary works. C1: 2
C1: 3
C1: 4
C2: HISTORY
Critical Thinking
Information Literacy
ILO 1
ILO 3
Analyze, interpret, and evaluate a significant span of history over a wide geographic area, and the histories of past societies and civilizations using the values and standards of their own contexts and times. C2: 1
C2: 3
Written Communication
Oral Communication
Critical Thinking
ILO 3
ILO 4
Articulate how significant historical forces shape the development of societies and civilizations, and use historical thinking to consider ethical issues in the past and present. C2: 2
C2: 3
C2: 4
D1: PHILOSOPHY
Critical Thinking
Information Literacy
ILO 1
ILO 2
ILO 3
Analyze, interpret, and evaluate central philosophical issues. D1: 1
D1: 2
Written Communication
Oral Communication
Critical Thinking
ILO 3
ILO 4
ILO 5
Articulate using philosophical methods primary philosophical themes and issues found in the writings of the major philosophers. D1: 3
D1: 4
D2: THEOLOGY AND RELIGIOUS STUDIES
Critical Thinking
Information Literacy
ILO 1
ILO 2
ILO 3
Analyze, interpret, and evaluate how religion, theology, and spirituality underlie and correlate with a broad range of human experience. D2: 1
D2: 2
Written Communication
Oral Communication
Critical Thinking
ILO 3
ILO 4
ILO 5
Articulate the similarities and differences among diverse religious traditions and their ethical and social implications. D2: 3
D3: ETHICS
Critical Thinking
Information Literacy
ILO 1
ILO 2
ILO 3
Analyze, interpret, and evaluate central ethical issues concerning right and wrong; good and bad; and equality, justice, and rights. D3: 1
D3: 2
D3: 3
Written Communication
Oral Communication
Critical Thinking
ILO 3
ILO 4
ILO 5
Articulate ethical theories and values and apply them in professional and personal decision making. D3: 4
D3: 5
D3: 6
E: SOCIAL SCIENCES
Critical Thinking
Information Literacy
ILO 1
ILO 3
Analyze, interpret, and evaluate issues regarding humans and the processes that shape their relationships, institutions, and interactions with their environments. E: 1
E: 2
E: 3
Critical Thinking
Quantitative Reasoning
Information Literacy
ILO 3
ILO 6
Use qualitative or quantitative data, analysis, or theory to evaluate causal arguments in the social sciences. E: 2
E: 3
E: 4
Written Communication
Oral Communication
Critical Thinking
ILO 3
ILO 4
Articulate social science arguments that recognize connections between the social, economic, political, and environmental spheres of human life in a socially responsible manner. E: 1
E: 5
E: 6
F: VISUAL AND PERFORMING ARTS
Critical Thinking
Information Literacy
ILO 1
ILO 3
Analyze, interpret, and evaluate the aesthetic, historical, socio-political, and cultural influences that inform diverse art works. F: 1
F: 2
Written Communication
Oral Communication
Critical Thinking
ILO 3
ILO 6
Apply conceptual and technical skills related to an artistic discipline by engaging in creative and scholarly processes. F: 2
F: 3
Written Communication
Oral Communication
Critical Thinking
ILO 3
ILO 4
Articulate the ethical and socio-political significance of the content and form of artistic works and the processes used to create them. F: 1
F: 2
F: 4

Appendix D: Core Assessment Reporting Protocol

  1. In the reporting process, CAWG prepares a draft report. If Core Areas are made up of more than one department, then individual department data will be reported separately as an addendum, and only aggregated data for the Core Area will be included in the main report.
  2. The draft main report and addenda reports will be shared with CAC, and CAC will provide feedback to CAWG about the reports.
  3. CAWG will incorporate feedback and submit final drafts of the main report and addenda to the CAC for approval.
  4. Once approved, the CAC co-chairs will submit the final reports to the CAS Dean for approval.
  5. Once approved, the CAC co-chairs will share the main report with all Arts and Sciences faculty. However, the addenda, with department-specific data, will be shared exclusively with each respective department, ideally via an in-person meeting with members of CAWG, in order to facilitate the “closing the loop” process.
  6. Departments will share a written response to the main report and appropriate addendum with the Core Area Chair, the CAC co-chairs, and the ADAE, to capture their interpretations of the findings and their action plans for completing the “closing the loop” part of the assessment process.
  7. Requests for course-specific data reporting from departments may be approved by the CAC if faculty within the department consent to its release using voting procedures outlined in their department bylaws. If any faculty member disagrees with, or has concerns about, the department consent to release course-specific reporting data, they may submit a statement of concern to the CAC along with the report of the department vote.

Developed 11/7/17 by CAWG Committee; approved by CAC 11/29/17. Revised 2/2/18 by CAWG; revision approved by CAC 2/2/18.

Appendix E: Rating Rubrics

A1. Public Speaking Higher Order Learning Goals (HOLGS)

Students will:

  1. Analyze, interpret, and evaluate the effectiveness of academic and civic communications by using rhetorical concepts and principles, and by identifying ethical problems in public address.
  2. Compose and present well-organized speeches, and well-reasoned, appropriately supported oral arguments.
A1 (Public Speaking) HOLG Rubric
Criteria Performance Standards
Exceeds Expectations (4) Meets Expectations (3) Needs Improvement (2) Below Expectations (1)
Evaluates effectiveness of communication using rhetorical concepts and principles. Evaluates effectiveness of communication with exceptional accuracy and specificity. Evaluates effectiveness of communication with acceptable accuracy and specificity. Evaluates effectiveness of communication with limited accuracy or specificity. Did not evaluate effectiveness of communication using rhetorical concepts and principles.

Composes oral communication. Composes oral communication with exceptionally effective organization, evidence, coherence, and audience awareness. Composes oral communication with mostly effective organization, evidence, coherence, and audience awareness. Composes oral communication with partially effective organization, evidence, coherence, or audience awareness. Did not compose oral communication with effective organization, evidence, coherence, or audience awareness.

Presents oral communication. Presents oral communication with exceptionally effective delivery and audience-centered, extemporaneous approach. Presents oral communication with mostly effective delivery and audience-centered, extemporaneous approach. Presents oral communication with partially effective delivery or partially effective audience- centered, extemporaneous approach. Did not present oral communication with effective delivery or audience-centered, extemporaneous approach.
Applies principles of ethical and socially responsible communication to public address. Applies principles of ethical and socially responsible communication to public address with exceptional insight (i.e., depth of analysis, nuance, or originality). Applies principles of ethical and socially responsible communication to public address with acceptable insight. Applies principles of ethical and socially responsible communication to public address with limited insight. Did not apply principles of ethical and socially responsible communication to public address.

Developed by CAWG Committee - October 2016. Approved by CAC - December 2016.
Edited - March 2017.

A1. Public Speaking Core Learning Outcomes (CLOs)

Students will:

  1. Craft and present well-organized, thesis-driven speeches. (Criteria 2 and 3)
  2. Present well-reasoned and appropriately supported oral arguments that are responsive to topic, purpose, audience, and occasion. (Criteria 2 and 3)
  3. Deliver speeches using an audience-centered, extemporaneous approach. (Criterion 3)
  4. Use rhetorical concepts and principles to evaluate the effectiveness of their own and others' communication in both academic and civic contexts. (Criterion 1)
  5. Use rhetorical concepts and principles to practice ethical and socially responsible public speaking, and to identify and evaluate ethical problems in public address. (Criterion 4)

A2. Rhetoric and Composition Higher Order Learning Goals (HOLGS)

Students will:

  1. Analyze, interpret, and evaluate linguistic and rhetorical strategies used in a variety of texts, and connect multiple texts in an argumentative essay, by making comparisons and contrasts between them.
  2. Compose sophisticated research questions and arguments in response to those questions, by conducting library research and using academic documentation methods.
A2 (Rhetoric and Composition) HOLG Rubric
Criteria Performance Standards
Exceeds Expectations (4) Meets Expectations (3) Needs Improvement (2) Below Expectations (1)
Critically analyzes linguistic and/or rhetorical strategies. Critically analyzes linguistic and/or rhetorical strategies with exceptional understanding and insight. Critically analyzes linguistic and/or rhetorical strategies with understanding and insight. Critically analyzes linguistic and/or rhetorical strategies with partial understanding and insight. Did not critically analyze linguistic and/or rhetorical strategies.
Composes research-based arguments integrating sources appropriate to the task. Composes exceptionally complex and substantive research-based arguments that integrate sources appropriate to the task. Composes complex and substantive research-based arguments that integrate sources appropriate to the task. Composes partially complex and substantive research-based arguments that integrate sources appropriate to the task. Did not compose research-based arguments that integrate sources appropriate to the task.
Produces professional and/or academic writing. Produces writing with exceptional technical skill, clarity and style, in keeping with professional and/or academic conventions. Produces writing with technical skill, clarity and style, in keeping with professional and/or academic conventions. Produces writing with partial technical skill, clarity and style, in keeping with professional and/or academic conventions. Did not produce writing with technical skill, clarity or style, in keeping with professional and/or academic conventions.

Developed by CAWG - October 2017. Approved by CAC - November 2017

A2. Rhetoric and Language Core Learning Outcomes (CLOs)

Students will develop competence in these areas:

  1. Critical analysis of academic discourse: Students critically analyze linguistic and rhetorical strategies used in long and complex texts from a variety of genres, subjects, and fields. (Criterion 1)
  2. Integrating multiple academic sources: Students incorporate multiple texts of length and complexity within a unified argumentative essay, addressing connections and differences among them. (Criterion 2)
  3. Academic research: Students develop sophisticated research questions and compose substantial arguments in response to those questions, incorporating extensive independent library research and demonstrating mastery of standard academic documentation modes. (Criterion 2)
  4. Style: Students edit their own prose to achieve a clear and mature writing style in keeping with the conventions of academic and/or professional discourse. (Criterion 3)
  5. Revision: Students develop revision strategies for extending and enriching early drafts and for producing polished advanced academic writing. (Criterion 3)

B1. Math Higher Order Learning Goals (HOLGS)

Students will:

  1. Design and implement mathematical solutions to algebraic, algorithmic, statistical, numerical, or computational problems.
  2. Evaluate the validity of a solution and its relevance to the original problem using quantitative reasoning as the norm for decision making.
B1 (Math) HOLG Rubric
Criteria Performance Standards
Exceeds Expectations (4) Meets Expectations (3) Needs Improvement (2) Below Expectations (1)
Design a mathematical solution. Designs solution and related elements with exceptional specificity and accuracy. Designs solution and related elements with appropriate specificity and accuracy. Designs a solution and related elements with limited specificity or accuracy. Did not design a solution, or designs solution with excessive errors.
Implement the design or identify and correct problems with the design. Implements design or identifies and corrects problems with design with exceptional specificity and accuracy. Implements design or identifies and corrects problems with design with appropriate specificity and accuracy. Implements design or identifies and corrects problems with design with limited specificity or accuracy. Did not implement design or identify and correct problems with design or did so with excessive errors.
Critically evaluate a solution and its relevance to the original problem. Critically evaluates a solution and its relevance using exceptional reasoned discourse. Critically evaluates solution and its relevance using appropriate reasoned discourse. Critically evaluates solution and its relevance using limited reasoned discourse. Did not critically evaluate solution and its relevance or did not use appropriate reasoned discourse.

Developed by CAWG - March 2017. Approved by CAC - May 2017

B1. Math (CLOs)

Students will be able to determine whether a problem lends itself to a mathematical solution and if so:

  1. Design a mathematical solution. (Criterion 1)
  2. Implement the design or identify and correct problems with the design. (Criterion 2)
  3. Evaluate the validity of a solution and its relevance to the original problem using reasoned discourse as the norm for decision making. (Criterion 3)

In the outcomes “mathematical” can mean one or more of “algebraic,” “algorithmic,” “statistical,” “numerical,” or “computational.”

D3. Ethics Higher Order Learning Goals (HOLGS)

Students will:

  1. Analyze, interpret, and evaluate central ethical issues concerning right and wrong; good and bad; and equality, justice, and rights.

  2. Articulate ethical theories and values and apply them in professional and personal decision making.
D3 (Ethics) HOLG Rubric
Criteria Performance Standards
Exceeds Expectations (4) Meets Expectations (3) Needs Improvement (2) Below Expectations (1)
Identifies key ethical theories, concepts, or issues. Identifies key content and related elements with exceptional specificity and accuracy. Identifies key content and related elements with acceptable specificity and accuracy. Identifies some key content and related elements with limited specificity or accuracy. Did not identify key content, or articulates content with excessive errors.
Explains significance of theories, concepts, or issues, and their inter-relations. Explains significance of content and inter-relations with exceptional clarity and accuracy. Explains significance of content and inter-relations with acceptable clarity and accuracy. Explains significance of content and inter-relations with limited clarity or accuracy. Did not explain significance of content and inter-relations or articulates significance with excessive errors.
Critically assesses theories, concepts, issues or positions within their appropriate disciplinary context. Critically assesses content within its appropriate disciplinary context with exceptional understanding and insight (e.g., depth of analysis, astuteness, originality). Critically assesses content within its appropriate disciplinary context with acceptable understanding and insight. Critically assesses content within its appropriate disciplinary context with limited understanding or insight. Did not critically assess content within its appropriate disciplinary context.
Articulates content analysis, interpretation, and evaluation using ethical theories. Articulates with exceptionally effective argumentation, composition, technical skill, clarity, and appropriate academic style. Articulates with mostly effective argumentation, composition, technical skill, clarity, and appropriate academic style. Articulates with partially effective argumentation, composition, technical skill, clarity, or appropriate academic style. Did not articulate with effective argumentation, composition, technical skill, clarity, or appropriate academic style.
Applies content to self or the world, considering multiple perspectives (e.g., comparative, historical, methodological) and why they matter. Applies content to self or the world, considering multiple perspectives and why they matter, with exceptional insight (e.g., depth of analysis, astuteness, originality). Applies content to self or the world, considering multiple perspectives and why they matter, with acceptable insight. Applies content to self or the world, considering multiple perspectives and why they matter, with limited insight. Did not apply content to self or the world, considering multiple perspectives and why they matter.

Developed by CAWG - February 2017. Approved by CAC - April 2017

D3. Ethics Core Learning Outcomes (CLOs)

Students will:

  1. Identify and articulate central ethical problems concerning equality, justice, and rights, and understand the role these play in personal and professional life. (Criterion 1)
  2. Compare and contrast major ethical theories, to show how actions can be determined to be just or unjust, right or wrong, or good or bad, and to demonstrate knowledge of the strengths and weaknesses of major ethical theories. (Criteria 2 & 3)
  3. Investigate ways of settling ethical disputes in arriving at ethical judgments. (Criteria 2 & 4)
  4. Think and write critically about classic and contemporary moral issues. (Criterion 5)
  5. Identify the contributions of diversity and recognize the challenge that it presents in resolving contemporary ethical issues. (Criterion 5)
  6. Demonstrate an ability to apply ethical theories and values in personal decision-making. (Criterion 5)

Appendix F: Expanded Methodology and Numbers for Assessment Process

In Fall 2017, the assessment time frame for A1, B1, and D3, there were a total of 1016 students enrolled in 53 sections of 6 different A1 courses, 1348 students enrolled in 48 sections of 15 different B1 courses, and 914 students enrolled in 29 sections of 10 different D3 courses. In Spring 2018, there were a total of 1391 students enrolled in 77 sections of 17 different A2 courses. At the beginning of each of these semesters, a call went out to all Core faculty in these areas for student work products to be assessed. Corie Schwabenland Garcia, Academic Data and Assessment Analyst for the Office of Academic Effectiveness, was charged with collecting all of the student work products submitted by faculty. A total of 1193 student work products were submitted for A1, 1250 for B1, 709 for D3, and 1449 for A2. A summary of enrollment numbers and overall assessment metrics is presented in Table 1. For A1, Corie then paired the submitted papers and videotaped speeches for each student, resulting in 511 sets (1022 assessable products in total) of A1 work products. The same pairing procedure was used for A2 products when appropriate. Approximately one-third of the submitted work products from each Core Area were randomly selected to be available for assessment. For all work products, student names and any identifying information were redacted, and any grading that appeared on the products was removed.
Out of the available pool of samples, 8 faculty raters assessed 116 pairs of A1 student work products (23% of the total submitted) on January 16, 2018; 4 faculty raters assessed 125 B1 work products (10% of the total submitted) on January 17, 2018; 4 faculty raters assessed 84 D3 work products (12% of the total submitted) on January 18, 2018; and 8 faculty raters assessed 138 A2 work products (10% of the total submitted) on June 4, 2018. A reliability check was performed during each rating session by having a subset of work products evaluated by two faculty raters. For A1, 19.8% (23 work products) were double-rated; for B1, 19.2% (24 products); for D3, 23.8% (20 products); and for A2, 22.5% (31 products). Those ratings were then used to perform an inter-rater reliability check as detailed in the report.

Fall 2017 & Spring 2018

Core Area   Student Enrollment          Work Products Submitted   % Compliance   Assessable Work Products   Revised % Compliance
A1          1016 (2 products/student)   1193 (1726*)              69.1%          1022 (1726*)               59.2%
A2          1391 (1 or 2 products)      1449 (2516*)              55.3%          1449 (2516*)               55.3%
B1          1348                        1250                      92.7%          1182                       87.7%
D3          914                         709 (725*)                97.8%          709 (725*)                 97.8%

Table 1. Core A1, A2, B1 and D3 assessment metrics for student work products and faculty compliance. Student work products were requested for every student enrolled in Core A1, A2, B1 and D3 courses.

Two work products were requested for each A1 student, resulting in a potential 2032 products collected for A1. A2 Core faculty submitted either 1 or 2 products per student. Faculty in each Core Area were asked to report, on an electronic work product submission form, what student work product(s) would be submitted and how they would be collected. Based on faculty responses to the form, the upper limit of product submissions was established (numbers shown in parentheses and denoted by *). Percent compliance for A1, A2, and D3 was calculated by dividing the actual number of products submitted by the expected maximum number of products as reported by submitting faculty.
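The compliance arithmetic described above can be expressed as a short sketch. This is a minimal illustration using the A1 and D3 submission figures reported in Table 1; it is not part of the official assessment tooling.

```python
def percent_compliance(submitted, expected_max):
    """Compliance = products actually submitted divided by the expected
    maximum number of products (as reported by submitting faculty),
    expressed as a percentage rounded to one decimal place."""
    return round(100.0 * submitted / expected_max, 1)


# A1: 1193 products submitted against an expected maximum of 1726.
print(percent_compliance(1193, 1726))  # 69.1

# D3: 709 products submitted against an expected maximum of 725.
print(percent_compliance(709, 725))  # 97.8
```

Both results match the % Compliance column in Table 1, confirming that the reported figures use the submitted-over-expected-maximum formula.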