Core Area Assessment Report Set 4 - C1: Literature & E: Social Sciences

History of Core Assessment Effort

The Core Assessment Working Group (CAWG) was formed in 2015 by the Core Advisory Committee (CAC), a committee made up of Department Chairs representing each Core Area, faculty representatives from the School of Management and the School of Nursing and Health Professions, and the Associate Dean of Academic Effectiveness. CAWG was created with the support of the College Council of the College of Arts and Sciences (CAS) in response to CAS Dean Emeritus Marcelo Camperi’s call for an assessment of the Core Curriculum. That call was issued in response to faculty concerns surrounding the growth and coherence of the Core, and because the WASC Senior College and University Commission (WSCUC) requires that colleges and universities engage in regular curricular assessment in order to retain their academic accreditation. This is the first time that the USF Core Curriculum has been assessed since its inception in 2002.

In Spring 2015, the CAC, with the guidance of Associate Dean for Academic Effectiveness June Madsen Clausen, asked that a working group be created to design and implement a process for assessing the Core curriculum. CAWG was then constituted by Dean Camperi, with representation from different areas of the Core curriculum. The initial membership of CAWG included Tracy Benning (Sciences), Christine Young (Arts), Yaniv Stopnitzky (Social Sciences), and Ronald Sundstrom (Humanities, CAWG Chair). In Spring 2017, Joshua Gamson replaced Yaniv Stopnitzky; in Fall 2017, Ryan Van Meter replaced Ronald Sundstrom. In Spring 2018, Eve-Anne Doohan replaced Joshua Gamson. Thus the current membership of CAWG is Tracy Benning (Sciences, CAWG Chair), Christine Young (Arts), Ryan Van Meter (Humanities) and Eve-Anne Doohan (Social Sciences).

CAWG, with Associate Dean Clausen, created a timeline for assessing the Core curriculum (see Appendix A for the Core Area Assessment Master Timeline; see Appendix B for the C1 and E Assessment Process Timelines), and concurrently began to investigate and design materials to support an assessment of the Core curriculum. The group conferred with a consultant, Carol Gittens (Associate Dean, Santa Clara University). Based on Gittens’s recommendation, CAWG consolidated the 48 learning outcomes from the 11 Core Areas (A1 through F) into a simplified and more measurable set of Higher Order Learning Goals (HOLGs) corresponding to each Core Area (see Appendix C). The HOLGs were then used to design a draft rubric for each Core Area, with the goal of developing rubrics specific enough to offer a meaningful measure of student learning in relation to Core learning outcomes and general enough that they could be applied to student work products from a variety of courses and disciplines within a Core Area.

The Core Areas were divided into five sets of 2-3 Core Areas, with each set due to be assessed once during a five-year period. This assessment process was broken into five phases, with staggered start dates for the different Core Area sets. The process includes the following steps:

  1. Faculty in each Core Area are asked to align their Core courses with the respective Core learning outcomes;
  2. Rubrics for each area are developed with input from faculty teaching in the relevant Core Area, and assessable student work products are identified;
  3. Workshops are conducted to inform faculty about the assessment process and to recruit faculty raters;
  4. Student work products are gathered and rated by paid faculty raters;
  5. The assessment results are interpreted by CAWG and shared with faculty and administration;
  6. Departments in the related Core Area reflect on the report and submit responses as described in the Reporting Protocol (see Appendix D).

CAWG has previously released reports for Sets 1 and 2. The Set 1 report (which includes Core Areas B2 and D1) was released on May 2, 2018, and the Set 2 report (which includes Core Areas A1, A2, B1, and D3) was released on December 4, 2018. Both reports are archived at the CAS Office of Academic Effectiveness website.

Process and Methodology for Core Area Assessments - Set 4

Core Areas C1 (Literature) and E (Social Sciences) are included in Set 4. The Core Area E assessment was postponed one semester from the original schedule (Fall 2018), so its rating session was conducted during the Spring 2019 semester rather than during Intersession 2019.

To begin the assessment of these Core Areas, syllabi from all courses in each of these Core Areas (taught in Spring 2017 for E and Fall 2017 for C1) were reviewed to check for alignment with the Core Learning Outcomes (CLOs). Alignment was defined as whether the CLOs and methods of assessment for each of the CLOs were included on each syllabus. The alignment check was organized by the Core Advisory Committee and headed by the Chair of each Core Area. For some Core Areas, the Core Area Chair reviewed all syllabi to check for alignment. For others, the Core Area Chair enlisted the help of the other members of the Core Area Committee (made up of Chairs and Directors of all Departments and Programs that teach in that particular Core Area). Faculty were notified via their Department Chair/Program Director if their syllabus was not in alignment, with the expectation that CLOs and methods of assessment would be included in future syllabi.

The semester following the syllabi alignment check, all full-time and part-time faculty teaching C1 and E classes were invited to attend rubric feedback sessions in their Core Area, to ensure that the rubrics remained true to the intentions of the existing CLOs, would make sense to faculty raters, would reflect the language and practices of the Core Area, and, when applied to student work products, would provide an accurate measure of whether and to what degree the learning outcomes were achieved. Each rubric was reviewed by faculty teaching in the Core Area during two or more rubric feedback sessions before its final approval by the CAC in May 2018 for both C1 and E (see Appendix E for the Rating Rubrics).

Additionally, at the rubric feedback sessions, faculty in each Core Area helped identify what types of student work products would be available and useful for assessment. Both C1 and E submitted a range of work product types: in C1, these included partial or full exams, analytical and argumentative papers, and original creative writing; in E, these included partial or full exams, analytical or argumentative papers, and an ungraded exam developed specifically for this assessment. In most cases in both areas, a single work product was submitted for each student. Student work products were then randomly sampled using a stratified approach based on overall course enrollments. (See Appendix F for details on the numbers of courses, student work products, and sampled student work products.)
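The stratified sampling step can be sketched as follows. This is a hypothetical illustration, not the actual sampling script: the course names, product counts, and sampling fraction are invented, and the real procedure stratified on overall course enrollments as described above.

```python
import random

# Hypothetical example of stratified random sampling: draw the same
# fraction of work products from each course (stratum) so that larger
# courses contribute proportionally more products to the sample.
work_products = {
    "COURSE-101": [f"101-{i}" for i in range(40)],  # invented counts
    "COURSE-202": [f"202-{i}" for i in range(25)],
    "COURSE-303": [f"303-{i}" for i in range(10)],
}

def stratified_sample(strata, fraction, seed=0):
    """Sample the given fraction from each stratum, at least one per course."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    sample = []
    for course, products in strata.items():
        k = max(1, round(fraction * len(products)))
        sample.extend(rng.sample(products, k))
    return sample

sampled = stratified_sample(work_products, fraction=0.25)
# 10 + 6 + 2 = 18 products with fraction=0.25
```

A simple random sample across all courses would risk under-representing small-enrollment courses; stratifying by course guarantees every course appears in the rated sample.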

Faculty in each Core Area were invited to apply to serve as faculty raters during a daylong assessment of student work products, for which they received a $250 honorarium. Five C1 faculty participated in the rating session held on January 16, 2019; seven E faculty participated in the rating session held on March 1, 2019 (see Appendix B for a list of participants). Rating was preceded by a calibration exercise, in which participants rated the same student work products and discussed any discrepancies in their application of the rubric. At both sessions, a recalibration exercise was held after the lunch break.

A portion of the work products was also rated by a second faculty rater to check inter-rater reliability (this procedure is explained later in this report). In total, raters assessed about 13.5% of the submitted C1 work products and about 12% of the submitted E work products.

Assessment Results

The established benchmark for Core assessment is that 70% or more of students should achieve a rating of 3 (Meets Expectations) or higher on each criterion in every Core Area. Each rubric criterion, while specific to its Core Area, uses the same 4-point rating scale: 1 = Below Expectations, 2 = Needs Improvement, 3 = Meets Expectations, 4 = Exceeds Expectations.
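As an illustration of how the benchmark is applied, the following sketch (not the official analysis code) computes the percentage of work products scoring 3 or above from a rating distribution. The counts used here are the Criterion 1 counts for C1 reported in Figure 1.

```python
# Rating -> number of work products, from Figure 1 (C1, Criterion 1).
counts = {1: 6, 2: 18, 3: 72, 4: 28}

total = sum(counts.values())
# Scores of 3 (Meets Expectations) and 4 (Exceeds Expectations) count
# toward the 70% benchmark.
meets = sum(n for score, n in counts.items() if score >= 3)
pct_meeting = 100 * meets / total

print(f"{pct_meeting:.1f}% of {total} work products scored 3 or above")
# prints "80.6% of 124 work products scored 3 or above"
```

This matches the 80.6% reported for C1 Criterion 1 in Table 1, which exceeds the 70% benchmark.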

C1 Literature

Results shown in Figures 1-4 reveal that a solid majority of students leave C1 courses able to identify ideas or formal features of various literary works; analyze the historical, social, and/or cultural influences that inform literary works; evaluate the possible plural meanings within a literary text, including social relevance or implications; and articulate responses to literary texts.

Student performance is strongest on Criterion 1 and lowest on Criterion 2, where 72.6% of student work products met the benchmark; however, all criteria are above the 70% benchmark. C1 is the second Core Area (out of the eight assessed to date) to achieve this distinction. Specifically:

●    Criterion 1: Identifies ideas or formal features of various literary works. A little over 80% of student work products were rated as meeting or exceeding expectations in this area, and fewer than 5% failed to demonstrate this criterion (Figure 1, Table 1).

Criterion 1. Identifies ideas or formal features of various literary works (bar graph). Rater score of 1 given to 6 student work products; rater score of 2 given to 18 student work products; rater score of 3 given to 72 student work products; rater score of 4 given to 28 student work products.

Figure 1. Rating score distribution for sampled C1 work products. A score of 3 or higher indicates that a student has met Criterion 1 competency expectations.

●    Criterion 2: Analyzes the historical, social and/or cultural influences that inform literary works. As stated previously, 72.6% of student work products were rated as meeting or exceeding expectations in this area, with fewer than 6% failing to demonstrate this criterion (Figure 2, Table 1). While this criterion had the lowest percentage of student work products meeting or exceeding expectations, it is still above the desired 70%.

Criterion 2. Analyzes the historical, social and/or cultural influences that inform literary works (bar graph). Rater score of 1 given to 7 student work products; rater score of 2 given to 27 student work products; rater score of 3 given to 67 student work products; rater score of 4 given to 23 student work products.

Figure 2. Rating score distribution for sampled C1 work products. A score of 3 or higher indicates that a student has met Criterion 2 competency expectations.

●    Criterion 3: Evaluates the possible plural meanings within a literary text, including social relevance or moral implications. Seventy-five percent of student work products were rated as meeting or exceeding expectations for this criterion. Twenty percent were rated as needing improvement, with less than 5% failing to demonstrate this criterion (Figure 3, Table 1).

Criterion 3. Evaluates the possible plural meanings within a literary text, including social relevance or moral implications (bar graph). Rater score of 1 given to 6 student work products; rater score of 2 given to 25 student work products; rater score of 3 given to 69 student work products; rater score of 4 given to 24 student work products.

Figure 3. Rating score distribution for sampled C1 work products. A score of 3 or higher indicates that a student has met Criterion 3 competency expectations.

●    Criterion 4: Articulates responses to literary texts. Again, a solid majority of student work products, 76%, were rated as meeting or exceeding expectations for this criterion. About 22% were rated as needing improvement, and very few (less than 2.4%) of the sampled work products failed to demonstrate this criterion (Figure 4, Table 1).

Criterion 4. Articulates responses to literary texts (bar graph). Rater score of 1 given to 3 student work products; rater score of 2 given to 27 student work products; rater score of 3 given to 69 student work products; rater score of 4 given to 25 student work products.

Figure 4. Rating score distribution for sampled C1 work products. A score of 3 or higher indicates that a student has met Criterion 4 competency expectations.

Criterion    Percentage of Students Scoring 3 or Above for C1 Criteria
1            80.6
2            72.6
3            75.0
4            75.8

Table 1. Percentage of students meeting expectations on C1 assessed work. The percentage is based on the number of work products with a rating score of 3 or higher divided by the total number of rated products overall.

E Social Science

Results from the Core E assessment exhibit variable levels of proficiency, and every criterion fell below the benchmark of 70% of students meeting or exceeding expectations. Figures 5-8 illustrate the results for the four criteria assessed. Specifically:
 
●    Criterion 1. Identifies and explains key social science concepts and issues. This criterion had the highest proportion of student work products meeting or exceeding expectations, at 62.7%. About 9% of the student work assessed failed to demonstrate this criterion (Figure 5, Table 2).

Criterion 1. Identifies and explains key social science concepts and issues (bar graph). Rater score of 1 given to 14 student work products; rater score of 2 given to 46 student work products; rater score of 3 given to 65 student work products; rater score of 4 given to 36 student work products.

Figure 5. Rating score distribution for sampled E work products. A score of 3 or higher indicates that a student has met Criterion 1 competency expectations.

●    Criterion 2. Employs evidence-based social science methods or theories to analyze concepts, issues or positions. A little over 61% of the student work products were scored 3 or above on Criterion 2, the second highest percentage of the four criteria. About 9.3% failed to demonstrate any level of competency with this criterion (Figure 6, Table 2).

Criterion 2. Employs evidence-based social science methods or theories to analyze concepts or issues or positions (bar graph). Rater score of 1 given to 15 student work products; rater score of 2 given to 47 student work products; rater score of 3 given to 72 student work products; rater score of 4 given to 27 student work products.

Figure 6. Rating score distribution for sampled E work products. A score of 3 or higher indicates that a student has met Criterion 2 competency expectations.

●    Criterion 3. Articulates social scientific arguments, reasoning, or analysis based on evidence. Only 47.8% of student work products sampled for assessment for this criterion were scored 3 or above. This was the lowest percentage of the 4 criteria. In addition, a little over 21% of the work products failed to demonstrate this criterion; when combined with products that scored a 2 (needs improvement), a little over 52% of the work products did not meet expectations.

Criterion 3. Articulates social scientific arguments, reasoning, or analysis based on evidence (bar graph). Rater score of 1 given to 34 student work products; rater score of 2 given to 50 student work products; rater score of 3 given to 62 student work products; rater score of 4 given to 15 student work products.

Figure 7. Rating score distribution for sampled E work products. A score of 3 or higher indicates that a student has met Criterion 3 competency expectations.

●    Criterion 4. Applies social scientific content to self or the world, considering diverse perspectives and why they matter. For this criterion, 59% of the work products sampled met or exceeded expectations. About 14% of student work products failed to demonstrate this criterion. Overall, about 41% of student work products scored below expectations (Figure 8, Table 2).

Criterion 4. Applies social scientific content to self or the world, considering diverse perspectives and why they matter (bar graph). Rater score of 1 given to 22 student work products; rater score of 2 given to 44 student work products; rater score of 3 given to 57 student work products; rater score of 4 given to 38 student work products.

Figure 8. Rating score distribution for sampled E work products. A score of 3 or higher indicates that a student has met Criterion 4 competency expectations.

Criterion    Percentage of Students Scoring 3 or Above for E Criteria
1            62.7
2            61.5
3            47.8
4            59.0

Table 2. Percentage of students meeting expectations on E assessed work. The percentage is based on the number of work products with a rating score of 3 or higher divided by the total number of rated products overall.

Reflections on Assessment Results

C1 Literature

Given that more than 70% of the Core C1 work products sampled scored “Meets Expectations” or higher on all criteria, it is clear that the curriculum and instruction in this area are quite effective. An impressive number of students complete these courses having successfully learned to identify ideas and formal features of literary texts, analyze the forces that inform literary texts, evaluate the possible plural meanings within a literary text, and articulate a response to their reading of these texts. Only a handful of work products received our lowest rating on each criterion. During the debriefing following the rating workshop, it was already clear to some raters that the results in this area would be strong; one rater reported she felt confident that students graduate from the College having seriously engaged with literature.

During the same conversation, faculty raters agreed that a robust variety of texts, subjects and ideas were being explored in Core C1 courses. At the same time, raters identified the wide range of work products submitted by instructors in the area as their only challenge in rating. Partial exams and creative writing assignments stood out as posing the most difficulty in applying the rubric. Raters felt that in these products, students were not given the full opportunity to demonstrate their learning as related to the criteria on the rubric. It is possible the scores for this area could have been even higher without such challenges. For any subsequent assessment of the area, it is recommended that instructors take all criteria on a rubric into consideration when submitting student work products.

Relative to the others on the rubric, Criterion 2 offers the greatest room for improvement for Core C1. Setting aside the aforementioned issue with work product selection, it is worth mentioning that in written and spoken feedback, raters noted a lack of evidence of intensive close reading in work products as well as an over-reliance on summary on the part of students in their written responses to literary texts. Acknowledging that these observations could play only a small role in the relatively lower scores on Criterion 2, one recommendation for C1 faculty is to consider revising the CLOs to explicitly address these observations, if indeed close reading and depth of analysis are high priorities for the area.

E Social Science

Core E is unique because of the number of different departments and programs that contribute courses to this area (nine different disciplines are included in Core E, ranging from Economics to Environmental Studies to Psychology). Faculty raters were able to successfully rate work products from across this wide range of courses and subjects, which was impressive. However, the results for this Core Area indicate that none of the four criteria achieved the benchmark of 70% of students meeting or exceeding expectations. The first criterion (identifying and explaining key social science concepts and issues) was the most successful, with 62.7% of students meeting or exceeding expectations. The second criterion (employing evidence-based social science methods or theories to analyze concepts, issues, or positions) resulted in 61.5% of students meeting or exceeding expectations.

The lowest scores were found for criterion three, which requires students to articulate social scientific arguments, reasoning, or analysis based on evidence, where only 47.8% of students met or exceeded expectations. The final criterion of applying social scientific knowledge to the world had 59% of students meeting or exceeding expectations.

Comments from the faculty raters help to interpret these lower results. Specifically, raters commented that work products frequently did not seem to align with the rubric. This was deemed to be especially true when multiple choice exams were submitted as work products. Raters also mentioned that students seemed to overly rely on their own experiences, rather than using social scientific theories or concepts. When students did apply social scientific theories, several raters commented that the work lacked a deeper level of analysis. The application criterion (criterion 4) was viewed as especially difficult to assess for some work products, with one rater commenting that this criterion seemed overly ambitious.

The lack of alignment between work products and specific criteria on the rubric may account for some of the lower scores, in particular when students were deemed not to have demonstrated the criterion at all (which ranged from 9%-21% for Core E). A discussion about whether the CLOs are appropriate may be helpful, and if the CLOs are still deemed appropriate, then a discussion about how to better match assignments to meet the CLOs would likely be beneficial.

Inter-Rater Reliability Analysis for Set 4 Raters

Inter-rater reliability is a numerical estimate of the degree of agreement among raters assessing the same work product. Inter-rater reliability was examined to ensure that the assessment process was both accurate and consistent. We used a basic two-rater model to calculate percent agreement, with both exact and adjacent ratings scored as agreement. Of the 124 work products rated for C1, a sample of 29 work products (23.4%) was rated twice; of the 161 work products rated for E, a sample of 34 work products (21.1%) was rated twice.
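The agreement calculation can be sketched as follows. This is an illustrative example with invented ratings, not the actual analysis code, but it implements the rule described above: exact matches and adjacent ratings (differing by one point on the 4-point scale) both count as agreement.

```python
# Two-rater percent agreement, with adjacent ratings counted as agreement.
def percent_agreement(rater_a, rater_b, adjacent_ok=True):
    """Return the percentage of paired ratings that agree."""
    assert len(rater_a) == len(rater_b)
    tolerance = 1 if adjacent_ok else 0  # adjacent = ratings one point apart
    agree = sum(abs(a - b) <= tolerance for a, b in zip(rater_a, rater_b))
    return 100 * agree / len(rater_a)

# Invented ratings for ten double-rated work products (scale of 1-4).
first_rater  = [3, 4, 2, 3, 1, 4, 3, 2, 3, 4]
second_rater = [3, 3, 2, 4, 3, 4, 3, 1, 3, 4]
# Differences: 0,1,0,1,2,0,0,1,0,0 -> nine of ten pairs within one point.
print(percent_agreement(first_rater, second_rater))  # prints 90.0
```

Counting adjacent ratings as agreement is a lenient standard; requiring exact matches (`adjacent_ok=False`) would yield lower agreement figures for the same data.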

A rule-of-thumb benchmark for rater data with four or fewer categories is that percent agreement of 90% or higher constitutes high agreement, while 75% is considered minimal agreement. Using these benchmarks, rater agreement on all C1 criteria surpassed the high agreement standard, with Criterion 3 having the highest percent agreement at 100%, closely followed by Criteria 1 and 2 at 96.6% (Table 3). Criterion 4 had the lowest percent agreement at 93.1%, but was still above the high agreement benchmark.

Criterion    C1 Rater Percent Agreement
1            96.6
2            96.6
3            100.0
4            93.1

Table 3. Inter-rater reliability for C1 criteria based on a two rater method of agreement.
 
Percent rater agreement for E criteria ranged from 85.3% to 94.1%, resulting in high agreement for two of the four criteria (Table 4). Criterion 4 had the lowest percent agreement at 85.3% but was still well above the minimal agreement benchmark, as was Criterion 2, with the second lowest percent agreement at 88.2%.

Criterion    E Rater Percent Agreement
1            91.2
2            88.2
3            94.1
4            85.3

Table 4. Inter-rater reliability for E criteria based on a two rater method of agreement.

Next Steps for Core Assessment Reports

Following the release of the Core assessment report and any department-specific data, departments and programs will be required to offer their interpretations of the results and to specifically evaluate the Core Learning Outcomes (CLOs) as they apply to the courses they teach. Departments and programs will be required to provide feedback on the current set of CLOs for their Core Area and to comment both on strategies to address deficiencies identified in the assessment process and on whether CLOs should be modified as a result of the assessment. Potential outcomes at the department or program level include, but are not limited to: 1) modifying current CLOs; 2) identifying more appropriate student work products for the assessment process; 3) suggesting modifications to the rubric used for the assessment; and 4) identifying changes to specific Core courses to better align with CLOs. This information will be collected in a simple Google form designed to capture faculty sentiment at the department or program level after discussion of the assessment results. Once submitted, this form will be sent to the relevant Core Area Chair, the Core Advisory Committee (CAC) co-chairs, and the Associate Dean of Academic Effectiveness. A timeline for reporting feedback will be established to align with other required assessment activities and reports for the College. Information from these reports will be used to inform Core Area Chairs and the CAC on the state of the Core curriculum and should provide the data necessary to guide any subsequent changes to the Core curriculum.

APPENDIX A: CORE AREA ASSESSMENT MASTER TIMELINE

CORE ASSESSMENT TIMELINE

Appendix B: Assessment Process Timeline

FT = Full-time, PT = Part-time, PHP = Preferred Hiring Pool

  • 11/7/17 CAWG Meeting – E Rubric Review - Participating faculty: John Higgins (PT - Media Studies)
  • 3/8/18 CAWG Meeting – E Rubric Review - Participating faculty: Alexandra Cassar (FT - Economics), Kevin Chun (FT - Psychology), Ed Munnich (FT - Psychology), Evelyn Ho (FT - Communication Studies), Eve-Anne Doohan (FT - Communication Studies), Elizabeth Katz (FT - Economics), Libo Xu (FT - Economics), Jesse Anttila-Hughes (FT - Economics), Dana Zartner (FT - International Studies)
  • 4/10/18 CAWG Meeting – C1 Rubric Review - Participating faculty: Karen Bouwer (FT - Modern and Classic Languages)
  • 4/13/18 CAWG Meeting – E Rubric Review - Participating faculty: Susannah Kaiser (FT - Media Studies), Kevin Chun (FT - Psychology); C1 Rubric Review - Participating faculty: Florentina Mocanu-Schendel (PT - Theater)
  • 4/16/18 CAWG attends English Department meeting for C1 Rubric Review
  • 5/2/18 C1 and E Rubrics approved at CAC meeting
  • 12/7/18 Faculty Rater Applications Due for C1 and E
  • 1/16/19 C1 Rating Workshop - Led by CAWG. Participating faculty: Omar Miranda (FT), Christina Garcia Lopez (FT), Ana Rojas (FT), Valerie Lo (PT), Anne Mairesse (FT)
  • 3/1/19 E Rating Workshop - Led by CAWG. Participating faculty: Jeff Paller (FT), John Higgins (PT), Anne Wenzel (PT), Jesse Anttila-Hughes (FT), David Silver (FT), Yaniv Stopnitzky (FT), Mayo Buenafe-Ze (PT)

APPENDIX C: HIGHER ORDER LEARNING GOALS (HOLGS)

Created by CAWG to simplify the Core Area Learning Outcomes into more consistent and more measurable terms.

Note: the Higher Order Learning Goals (HOLGs) do not replace the Core Learning Outcomes (CLOs). CLOs will still be used to evaluate whether a course should receive Core designation. HOLGs will be used exclusively for assessing student learning in Core courses.

Each entry below lists the associated WASC competencies, the Institutional Learning Outcomes (ILOs), the HOLG, and the CLOs that the HOLG consolidates.

A1: PUBLIC SPEAKING
  • WASC: Written Communication, Oral Communication, Critical Thinking. ILOs: 1, 3. HOLG: Analyze, interpret, and evaluate, using rhetorical concepts and principles, the effectiveness of their own and others' communication in both academic and civic contexts, and identify ethical problems in public address. CLOs: A1: 4, A1: 5.
  • WASC: Written Communication, Oral Communication. ILO: 4. HOLG: Compose and present well-organized speeches, and well-reasoned, appropriately supported oral arguments. CLOs: A1: 1, A1: 2, A1: 3.

A2: RHETORIC AND COMPOSITION
  • WASC: Written Communication, Oral Communication, Critical Thinking. ILOs: 1, 3. HOLG: Analyze, interpret, and evaluate linguistic and rhetorical strategies used in a variety of texts, and connect multiple texts in an argumentative essay, making comparisons and contrasts between them. CLOs: A2: 1, A2: 2.
  • WASC: Information Literacy. ILOs: 5, 6. HOLG: Compose sophisticated research questions and arguments in response to those questions, conducting library research, and using academic documentation methods. CLOs: A2: 3, A2: 4, A2: 5.

B1: MATH
  • WASC: Critical Thinking, Quantitative Reasoning, Information Literacy. ILOs: 3, 6. HOLG: Design and implement mathematical solutions to algebraic, algorithmic, statistical, numerical, or computational problems. CLOs: B1: 1, B1: 2.
  • WASC: Critical Thinking, Quantitative Reasoning, Information Literacy. ILO: 6. HOLG: Evaluate the validity of a solution and its relevance to the original problem using quantitative reasoning as the norm for decision making. CLO: B1: 3.

B2: SCIENCE
  • WASC: Critical Thinking, Quantitative Reasoning, Information Literacy. ILO: 3. HOLG: Demonstrate literacy in the content and principles of a scientific discipline. CLO: B2: 1.
  • WASC: Critical Thinking, Quantitative Reasoning, Information Literacy. ILOs: 1, 3, 6. HOLG: Conduct laboratory or field procedures that explore content, principles, and application of scientific disciplines in a socially responsible manner. CLOs: B2: 2, B2: 3, B2: 4.

C1: LITERATURE
  • WASC: Critical Thinking, Information Literacy. ILOs: 1, 3. HOLG: Analyze, interpret, and evaluate the historical, social, and cultural influences that inform diverse literary works. CLOs: C1: 1, C1: 2, C1: 3.
  • WASC: Written Communication, Oral Communication, Critical Thinking. ILOs: 3, 4. HOLG: Articulate the ideas, plural meanings, moral and social implications, and formal features of literary works. CLOs: C1: 2, C1: 3, C1: 4.

C2: HISTORY
  • WASC: Critical Thinking, Information Literacy. ILOs: 1, 3. HOLG: Analyze, interpret, and evaluate a significant span of history over a wide geographic area, and the histories of past societies and civilizations using the values and standards of their own contexts and times. CLOs: C2: 1, C2: 3.
  • WASC: Written Communication, Oral Communication, Critical Thinking. ILOs: 3, 4. HOLG: Articulate how significant historical forces shape the development of societies and civilizations, and use historical thinking to consider ethical issues in the past and present. CLOs: C2: 2, C2: 3, C2: 4.

D1: PHILOSOPHY
  • WASC: Critical Thinking, Information Literacy. ILOs: 1, 2, 3. HOLG: Analyze, interpret, and evaluate central philosophical issues. CLOs: D1: 1, D1: 2.
  • WASC: Written Communication, Oral Communication, Critical Thinking. ILOs: 3, 4, 5. HOLG: Articulate, using philosophical methods, primary philosophical themes and issues found in the writings of the major philosophers. CLOs: D1: 3, D1: 4.

D2: THEOLOGY AND RELIGIOUS STUDIES
  • WASC: Critical Thinking, Information Literacy. ILOs: 1, 2, 3. HOLG: Analyze, interpret, and evaluate how religion, theology, and spirituality underlie and correlate with a broad range of human experience. CLOs: D2: 1, D2: 2.
  • WASC: Written Communication, Oral Communication, Critical Thinking. ILOs: 3, 4, 5. HOLG: Articulate the similarities and differences among diverse religious traditions and their ethical and social implications. CLO: D2: 3.

D3: ETHICS
  • WASC: Critical Thinking, Information Literacy. ILOs: 1, 2, 3. HOLG: Analyze, interpret, and evaluate central ethical issues concerning right and wrong; good and bad; and equality, justice, and rights. CLOs: D3: 1, D3: 2, D3: 3.
  • WASC: Written Communication, Oral Communication, Critical Thinking. ILOs: 3, 4, 5. HOLG: Articulate ethical theories and values and apply them in professional and personal decision making. CLOs: D3: 4, D3: 5, D3: 6.

E: SOCIAL SCIENCES
  • WASC: Critical Thinking, Information Literacy. ILOs: 1, 3. HOLG: Analyze, interpret, and evaluate issues regarding humans and the processes that shape their relationships, institutions, and interactions with their environments. CLOs: E: 1, E: 2, E: 3.
  • WASC: Critical Thinking, Quantitative Reasoning, Information Literacy. ILOs: 3, 6. HOLG: Use qualitative or quantitative data, analysis, or theory to evaluate causal arguments in the social sciences. CLOs: E: 2, E: 3, E: 4.
  • WASC: Written Communication, Oral Communication, Critical Thinking. ILOs: 3, 4. HOLG: Articulate social science arguments that recognize connections between the social, economic, political, and environmental spheres of human life in a socially responsible manner. CLOs: E: 1, E: 5, E: 6.

F: VISUAL AND PERFORMING ARTS
  • WASC: Critical Thinking, Information Literacy. ILOs: 1, 3. HOLG: Analyze, interpret, and evaluate the aesthetic, historical, socio-political, and cultural influences that inform diverse art works. CLOs: F: 1, F: 2.
  • WASC: Written Communication, Oral Communication, Critical Thinking. ILOs: 3, 6. HOLG: Apply conceptual and technical skills related to an artistic discipline by engaging in creative and scholarly processes. CLOs: F: 2, F: 3.
  • WASC: Written Communication, Oral Communication, Critical Thinking. ILOs: 3, 4. HOLG: Articulate the ethical and socio-political significance of the content and form of artistic works and the processes used to create them. CLOs: F: 1, F: 2, F: 4.

APPENDIX D: CORE ASSESSMENT REPORTING PROTOCOL

  1. In the reporting process, CAWG prepares a draft report. If a Core Area comprises more than one department, individual department data will be reported separately as an addendum, and only aggregated data for the Core Area will be included in the main report.
  2. The draft main report and addenda reports will be shared with CAC, and CAC will provide feedback to CAWG about the reports.
  3. CAWG will incorporate feedback and submit final drafts of the main report and addenda to the CAC for approval.
  4. Once approved, the CAC co-chairs will submit the final reports to the CAS Dean for approval.
  5. Once approved, the CAC co-chairs will share the main report with all Arts and Sciences faculty. However, the addenda, with department-specific data, will be shared exclusively with each respective department, ideally via an in-person meeting with members of CAWG, in order to facilitate the “closing the loop” process.
  6. Departments will share a written response to the main report and appropriate addendum with the Core Area Chair, the CAC co-chairs, and the ADAE, to capture their interpretations of the findings and their action plans for completing the “closing the loop” part of the assessment process.
  7. Requests for course-specific data reporting from departments may be approved by the CAC if faculty within the department consent to its release using voting procedures outlined in their department bylaws. If any faculty member disagrees with, or has concerns about, the department’s consent to release course-specific reporting data, they may submit a statement of concern to the CAC along with the report of the department vote.

Developed 11/7/17 by CAWG; approved by CAC 11/29/17. Revised 2/2/18 by CAWG; revision approved by CAC 2/2/18.

Appendix E: Rating Rubrics

C1. Literature Higher Order Learning Goals (HOLGs)

Students will:

  1. Analyze, interpret, and evaluate the historical, social, and cultural influences that inform diverse literary works.
  2. Articulate the ideas, plural meanings, moral and social implications, and formal features of literary works.
C1 (Literature) HOLG Rubric

Criterion 1: Identifies ideas or formal features of various literary works.
  Exceeds Expectations (4): Identifies ideas or formal features of literary works with exceptional specificity and accuracy.
  Meets Expectations (3): Identifies ideas or formal features of literary works with appropriate specificity and accuracy.
  Needs Improvement (2): Identifies ideas or formal features of literary works with limited specificity and accuracy.
  Below Expectations (1): Did not identify ideas or formal features of literary works, or did so with excessive errors.

Criterion 2: Analyzes the historical, social, and/or cultural influences that inform literary works.
  Exceeds Expectations (4): Analyzes the historical, social, and/or cultural influences that inform literary works with exceptional clarity and accuracy.
  Meets Expectations (3): Analyzes the historical, social, and/or cultural influences that inform literary works with acceptable clarity and accuracy.
  Needs Improvement (2): Analyzes the historical, social, and/or cultural influences that inform literary works with limited clarity and accuracy.
  Below Expectations (1): Did not analyze the historical, social, and/or cultural influences that inform literary works, or did so with excessive errors.

Criterion 3: Evaluates the possible plural meanings within a literary text, including social relevance or moral implications.
  Exceeds Expectations (4): Evaluates the possible plural meanings within a literary text with exceptional understanding and insight (e.g., depth of analysis, cleverness, lateral thinking, originality).
  Meets Expectations (3): Evaluates the possible plural meanings within a literary text with appropriate understanding and insight.
  Needs Improvement (2): Evaluates the possible plural meanings within a literary text with limited understanding and insight.
  Below Expectations (1): Did not evaluate the possible plural meanings within a literary text, or did so with excessive errors.

Criterion 4: Articulates responses to literary texts.
  Exceeds Expectations (4): Articulates responses to literary texts with exceptionally effective argumentation, composition, technical skill, clarity, and/or appropriate academic style.
  Meets Expectations (3): Articulates responses to literary texts with mostly effective argumentation, composition, technical skill, clarity, and/or appropriate academic style.
  Needs Improvement (2): Articulates responses to literary texts with partially effective argumentation, composition, technical skill, clarity, and/or appropriate academic style.
  Below Expectations (1): Did not articulate responses to literary texts with effective argumentation, composition, technical skill, clarity, and/or appropriate academic style, or did so with excessive errors.

Developed by CAWG March 2018; approved by CAC May 2018.

C1. Literature Core Learning Outcomes (CLOs)

  1. Demonstrate a basic understanding of the literary, historical, social, and cultural influences that inform literary works, including diversity of perspectives, experiences, and traditions. (Criterion 2)
  2. Articulate in writing and discussion their responses to literary texts (75% of which must be written texts) with a view to equipping them with the knowledge, values, and sensitivity to succeed as persons and professionals. (Criterion 4)
  3. Demonstrate a basic critical ability to identify, interpret, and evaluate the ideas and formal features of an integrated body of literary texts in the context of a socially responsible learning community of high quality scholarship and academic rigor. (Criterion 1)
  4. Show a sensitivity to the plurality of meanings within a literary text, including the moral implications of human choices. (Criterion 3)

E. Social Sciences Higher Order Learning Goals (HOLGs)

Students will:

  1. Analyze, interpret, and evaluate issues regarding humans and the processes that shape their relationships, institutions, and interactions with their environments. (Criterion A)
  2. Use qualitative or quantitative data, analysis, or theory to evaluate arguments in the social sciences. (Criterion B, C)
  3. Articulate social science arguments that recognize connections between the social, economic, political, and environmental spheres of human life in a socially responsible manner. (Criterion C, D)
E (Social Sciences) HOLG Rubric

Criterion A: Identifies and explains key social science concepts and issues.
  Exceeds Expectations (4): Identifies and explains key content with exceptional specificity and accuracy.
  Meets Expectations (3): Identifies and explains key content with specificity and accuracy.
  Needs Improvement (2): Identifies and explains key content with limited specificity or accuracy.
  Below Expectations (1): Does not identify or explain key content, or articulates content with excessive errors.

Criterion B: Employs evidence-based social science methods or theories to analyze concepts, issues, or positions.
  Exceeds Expectations (4): Employs evidence-based social science methods or theories within their appropriate context with exceptional understanding and insight (e.g., depth of analysis, astuteness, originality).
  Meets Expectations (3): Employs evidence-based social science methods or theories within their appropriate context with understanding and insight.
  Needs Improvement (2): Employs evidence-based social science methods or theories within their appropriate context with limited understanding and insight.
  Below Expectations (1): Does not employ evidence-based social science methods or theories.

Criterion C: Articulates social scientific arguments, reasoning, or analysis based on evidence.
  Exceeds Expectations (4): Articulates with exceptionally effective argumentation, composition, technical skill, clarity, and appropriate academic style.
  Meets Expectations (3): Articulates with effective argumentation, composition, technical skill, clarity, and appropriate academic style.
  Needs Improvement (2): Articulates with partially effective argumentation, composition, technical skill, clarity, or appropriate academic style.
  Below Expectations (1): Does not articulate with effective argumentation, composition, technical skill, clarity, or appropriate academic style.

Criterion D: Applies social scientific content to self or the world, considering diverse perspectives and why they matter.
  Exceeds Expectations (4): Applies content to self or the world, considering diverse perspectives and why they matter, with exceptional insight (e.g., depth of analysis, astuteness, originality).
  Meets Expectations (3): Applies content to self or the world, considering diverse perspectives and why they matter, with insight.
  Needs Improvement (2): Applies content to self or the world, considering diverse perspectives and why they matter, with limited insight.
  Below Expectations (1): Does not apply content to self or the world, considering diverse perspectives and why they matter.

Developed by CAWG September 2017, approved by CAC May 2018 

E. Social Sciences Core Learning Outcomes (CLOs)

Students will:

  1. Engage in the systematic and logical study of human beings and their interrelationships, with an appreciation of human diversity. (Criterion B, Criterion C)
  2. Employ one or more social science methods or social science theories and philosophies. (Criterion B)
  3. Analyze explanations of human behavior, human relations, or human institutions. (Criterion A, Criterion B)
  4. Apply social science knowledge to contemporary social problems, including ways to improve the human condition and promote justice. (Criterion B, Criterion D)
  5. Understand and demonstrate social responsibility. (Criterion D)
  6. Communicate social science knowledge to a world shared by all people and held in trust for future generations. (Criterion C, Criterion D)

Appendix F: Expanded Methodology and Numbers for Assessment Process

In Fall 2018, the assessment time frame for C1 and E, a total of 864 students were enrolled in 34 sections of 21 different C1 courses, and 1851 students were enrolled in 57 sections of 27 different E-designated courses. At the beginning of the semester, a call went out to all Core faculty in these areas for student work products to be assessed. Corie Schwabenland Garcia, Academic Data and Assessment Analyst for the Office of Academic Effectiveness, was charged with collecting all of the student work products submitted by faculty. A total of 920 student work products were submitted for C1. This number is greater than the number of students enrolled in C1 courses because some faculty submitted more than one work product for the students in their course. Products were not submitted for 86 students taking a C1-designated course because two Core C1 faculty members did not submit products. A total of 1386 work products were submitted for Core E, considerably fewer than the number of students enrolled in Core E courses for this semester. Six faculty did not respond to the submission request (-148 work products), and a number of faculty failed to submit a work product for every student in their course. The partial submissions reduced the work product pool by an additional 169 samples. A summary of enrollment numbers and overall assessment metrics is presented in Table 5. For the collected C1 work products, Schwabenland Garcia then went through the submitted items and paired work products where appropriate. The same pairing procedure was used for E products when necessary. Approximately one-third of the submitted work products from each Core Area were then randomly selected to be available for assessment. For all work products, student names and any identifying information were redacted, and any grading that appeared on the products was removed.
From the available pool of C1 samples, 5 faculty raters assessed 124 student work products (13.5% of the total number of products submitted) on January 15, 2019; on March 1, 2019, 7 faculty raters assessed 161 E work products (about 12% of the total number of products submitted). A reliability check was performed during each rating session by having a subset of work products evaluated by two faculty raters. For C1, 29 student work products (23.4%) were double-rated; for E, 34 products (21.1%) were double-rated. Those ratings were then used to perform an inter-rater reliability check as detailed in the report.
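The sampling and double-rating percentages above follow directly from the counts reported in this appendix; the short Python sketch below (using only figures given here, no assumed data) reproduces them:

```python
# Counts reported in Appendix F for the Fall 2018 C1 and E assessment.
c1_submitted, c1_rated, c1_double = 920, 124, 29
e_submitted, e_rated, e_double = 1386, 161, 34

# Share of submitted work products that were rated.
c1_sample_pct = round(100 * c1_rated / c1_submitted, 1)  # 13.5
e_sample_pct = round(100 * e_rated / e_submitted, 1)     # 11.6 ("about 12%")

# Share of rated work products scored by two raters (reliability check).
c1_double_pct = round(100 * c1_double / c1_rated, 1)     # 23.4
e_double_pct = round(100 * e_double / e_rated, 1)        # 21.1

print(c1_sample_pct, e_sample_pct, c1_double_pct, e_double_pct)
```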

Core Area | Fall 2018 Student Enrollment | Work Products Submitted | % Compliance | Assessable Work Products | Revised % Compliance
C1 | 864 | 778 (920*; sometimes 2 products/student) | 84.6% | 778 | 84.6%
E | 1851 | 1386 (1703*) | 81.4% | 1386 | 81.4%

Table 5. Core C1 and E assessment metrics for student work products and faculty compliance. Student work products were requested for every student enrolled in Core C1 and E courses. Faculty in each Core Area were asked to report, on an electronic work product submission form, what student work product(s) would be submitted and how they would be collected. Based on faculty responses to the form, an expected number of product submissions was established; those numbers are shown in parentheses and denoted by an asterisk (*). Percent compliance for C1 and E was calculated by dividing the actual number of products submitted by the expected number of products reported on the faculty submission forms.
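The percent-compliance formula described in the table caption can be checked against the table's own figures; a minimal Python sketch (the counts come directly from Table 5, and the helper function name is illustrative, not part of any assessment tooling):

```python
# Percent compliance = actual products submitted / expected products
# (expected counts come from the faculty work-product submission forms).
def pct_compliance(submitted: int, expected: int) -> float:
    """Return percent compliance rounded to one decimal place."""
    return round(100 * submitted / expected, 1)

print(pct_compliance(778, 920))    # C1: 84.6
print(pct_compliance(1386, 1703))  # E: 81.4
```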