Photo: Nicole MartinRogers, Sera Kinoglu, and Anna Granias pose wearing conference badges.

Key Takeaways from the Culturally Responsive Evaluation and Assessment Conference


Three of us from Wilder Research (Nicole MartinRogers, Sera Kinoglu, and Anna Granias) and about 300 of our culturally responsive evaluation colleagues from around the world recently attended the fifth annual conference in Chicago, IL, sponsored by the Center for Culturally Responsive Evaluation and Assessment (CREA) in the College of Education at the University of Illinois at Urbana-Champaign. The Center’s mission is to generate evidence for policymaking that is methodologically, culturally, and contextually defensible.

This year’s conference theme was “Intersectionality as critical inquiry, method, and practice: Moving beyond nominal categories and false dichotomies in culturally responsive evaluation and assessment.”

Intersectionality is the interconnectedness of social categories such as race, class, gender, sexual orientation, disability status, religion, and so on, which show up in society as overlapping and interdependent systems of privilege and disadvantage. Read more about intersectionality in this piece from Kimberlé Crenshaw, who coined the term.

In addition to presenting on Wilder Research’s efforts to build our capacity to conduct culturally responsive evaluation and research, we attended sessions and met people who helped us think about and apply intersectionality and culturally responsive evaluation theory and practice in our work. Here are a few of our key takeaways.

Nicole’s takeaway: Multicultural validity

I learned about multicultural validity, a concept that helps evaluators attend to Western notions of validity in social science research while also paying attention to culture and context, so that they neither leave something out nor over-interpret. Dr. Karen Kirkhart from Syracuse University developed this work to link validity theory with culture.

In reading more about this theory, I appreciate how it aligns with my professional experience, which points to the importance of establishing congruence between evaluation theory and methods and the cultural context in which the evaluation takes place.

What does this mean in practice? My social science training tells me that a scientific random sample of 400 tribal citizens is adequate to statistically represent the attitudes and opinions of the entire population of the tribe. However, that doesn’t mean the tribal leaders will be satisfied with basing their decisions on a scientific random sample, especially when their approach leans toward consensus-based decision making.
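(For readers curious about the arithmetic behind that rule of thumb: under standard survey-sampling assumptions not drawn from the conference material, a 95% confidence level and maximum variability, p = 0.5, the margin of error for a simple random sample of 400 is roughly 1.96 × √(0.5 × 0.5 / 400) ≈ 0.049, or about ±5 percentage points, largely regardless of how big the total population is.)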

I appreciate this concept because I am looking for better ways to ensure rigor in my work, and I understand that many Western methods and standards for validity don’t make sense for, or fully capture, what is needed to tell a true and fair story for some individuals and groups.

Sera’s takeaway: Research in a cultural context

In addition to having a chance to meet and interact with colleagues from a variety of fields, I appreciated hearing from several presenters who worked with Indigenous communities; they emphasized a few key considerations:

  • Value many types of knowledge (e.g., experiential) when partnering with communities
  • Reframe the purpose of research as storytelling
  • Share histories and stories at the start of a partnership to build the relationship and encourage understanding

During his keynote, Dr. Eric Jolly of the St. Paul & Minnesota Foundations engaged the audience in a brief exercise that reminded us of the importance of situating research and evaluation within cultural context. In other words, we must continually reflect: Who is informing our work? Who is setting the priorities? By cultivating power and authority in the communities and partners with whom we work, we help to build more equitable and valid evaluations and assessments, and ensure that the outcomes of these projects truly serve the community.

Anna’s takeaway: Meaningful, collaborative research partnerships

I was particularly impressed by presenters from the Alliance for Research in Chicagoland Communities, a collaboration of community- and faith-based organizations, public agencies, and faculty at Northwestern University, who have developed meaningful, collaborative research partnerships through advisory boards, community-based participatory research, and a variety of community engagement efforts. Their emphasis on community advisory boards reminded me of Wilder’s Speaking for Ourselves study with and for immigrant and refugee communities in the Twin Cities, which used an advisory board to improve the validity of the data collection and interpretation.

The collaborative’s website has an abundance of helpful resources for all parties involved in a research project. The site even includes a link to Wilder’s Collaboration Factors Inventory, a tool for evaluating the health of collaborations.

Looking ahead

We felt right at home at the CREA conference. Culturally responsive evaluation and research is something we think about a lot and strive to improve on with every project, organization, and community we work with. Understanding intersectionality is a critical piece to this puzzle.

Networking with and learning from international experts is just one way we’re staying on top of our culturally responsive evaluation and research game here at Wilder Research.


“Intersectionality is the intentional disruption of a single story.”

Joan LaFrance, CREA conference attendee