While the landscape of learning and evaluation in the philanthropic sector has shifted over time, there continue to be clear signals that the sector prioritizes learning as a mechanism for contributing to social change. In the past few years, various resources and communities of practice have been developed to explicitly explore the role of learning in philanthropy: in strategic adaptation within complex environments, in navigating systems change, in building more equitable processes and outcomes, and in strengthening trust and accountability to the community partners and nonprofit organizations we work with. Given the myriad purposes and structures that strategic learning can take in an organization, a critical question that organizational learning leaders frequently revisit is, “What will it take for our learning practice to be fit for purpose?”
At Hopelab, we found ourselves exploring this question for the first time in 2021. As a researcher, investor, and convener supporting equity-centered solutions for young people’s mental health and well-being, Hopelab had developed a deep learning ethos over its first 20 years of existence, yet had never had a formal learning function. At the end of 2021, the organization decided to make a major strategic shift away from building interventions and toward more broadly supporting youth mental health via impact investing, grantmaking, research, programs, and convenings. Given the many new ways we would be operating, the leadership team also recognized it was a ripe moment to formalize a learning and impact function at the organization to accompany our 2022-25 strategy cycle.

In building out the function, we have benefited immensely from other organizations pulling back the curtain and sharing the stories of what it has taken to build their organizational learning practices. In that spirit, and in line with Emergent Learning, we’d like to “return learning to the system” and share some takeaways from our first three years of building a learning function that could be of value to other learning, impact, or evaluation professionals.
Build Balance: Best Practices vs. Pain Points
In building the function from scratch for our organization, I benefited immensely from being a programmatic staff member from 2017-21. This allowed me to leverage and adapt many of our existing norms, practices, and organizational culture toward building our learning practice. At the beginning, we conducted a situational assessment of organizational knowledge, needs, and concerns about a Learning and Impact (L&I) function. I spent a lot of time with resources like Engage R+D’s Field Guide for E&L Leaders and the Emergent Learning Community Project principles and tools, trying to develop a learning purpose and approach that would make sense for us.

As I began socializing and testing the approach, my colleagues’ biggest concern surfaced: the perceived tension between “doing the work” and “making time to learn.” So, I leaned more heavily toward noticing and solving the acute pain points they were encountering in their day-to-day by doing the following:
- Intentionally scrolling our internal Slack channels daily to identify places where L&I could be of service to a programmatic leader or team.
- Hosting monthly 1:1s with every programmatic lead to listen to what was feeling challenging or inspiring that month, then sharing back patterns emerging across the organization and offering to facilitate follow-up, cross-functional, or team learning conversations on whatever felt most timely for the team.
- Leveraging spaces like quarterly staff retreats where people were already in learning and evaluative thinking mode to conduct learning conversations or test new tools.
In the early months, this created a capacity challenge: learning practices were being built in a fairly bespoke way for individual teams, initiatives, or departments. Over time, however, making visible how teams used insights for decision-making, adaptation, or reporting created buy-in for org-level practices.
Walk Alongside Staff: Building Trust, Value & Ownership
I always remember a former colleague saying, “I don’t really understand if or why we need to do this, but I trust you, so I’ll try it.” That deep trust was a critical leverage point and set the foundation for trying things that were very new to our staff.
A prime example of trust as a foundational ingredient was around evaluative work. Given that the organization used to primarily develop and evaluate interventions, evaluation was often thought of as synonymous with an RCT (randomized controlled trial). The broader approach of evaluative thinking and varied methodological approaches aimed at learning for adaptation and decision-making were not on folks’ radars.

Early in building the L&I function, I dedicated a lot of time to broadening the aperture on what evaluative thinking and evaluation could look like. However, it got very jargony very quickly, and people often ended up more confused by new language than excited about what it might make possible. So, we pivoted to experimenting by doing, narrating what we were doing along the way so that people acquired the language to talk about their learning practices and the value those practices created for them. This required both the willingness of the program staff to trust me in trying something new and my availability to provide deep accompaniment. We used real-life project needs and partners to explore different approaches such as developmental evaluation, learning agendas, emergent learning tables, collaborative data sensemaking, and learning logs.
Shoutout to our fantastic consultants whom we have worked with: Lisa Carpenter, Meaningful Observations, NineFold, Emergence Collective, and Intention 2 Impact. Dedicating time and resources to learning and evaluation partners who were “in it” alongside teams created a shared sense of value in the work and increased ownership from our programmatic staff.
Rally Champions: Bring Party Favors
Originally, one specific program team was an early adopter of learning practices, and I hoped that if we could start with one team and model the value, then buy-in across the organization would spread from there. However, what ended up being the catalyst for org-wide adoption was having champions at more of a cross-functional level: a chief of staff who could engage our leadership team in strategic learning loops and a program strategy lead who could support the operationalization and implementation of learning practices across programs. Both of those colleagues also played a critical role in helping “catch” and communicate learning-related needs and interests from all levels of the organization, allowing L&I to meet acute needs in a timely way and hold the bigger picture of organizational learning in service of our strategy and guiding star. Team-level champions were still critical; they could often talk about the utility and value of learning practice in a way that sparked curiosity amongst other programmatic colleagues.
The cross-functional champions that emerged have also played a role in producing “party favors.” Party favors are artifacts we produce after each learning conversation that serve as a tangible takeaway for our staff to use. Sometimes, it’s as simple as a list of bullet points with the key insights and next steps, so there is alignment on the key takeaways from the conversation. Other times, the artifact is more substantial: for example, following a conversation where various projects shared their lessons learned from recent recruitment and application processes, we created an org-wide guide that captured the insights and key considerations for each stage of recruitment, application, and selection, which subsequent projects could use moving forward.
Looking Forward: Building on Our Learning Journey
We recently hosted a look-back session where we took a virtual gallery walk with our staff to reflect on what we had tried together in our organizational learning practice and what we wanted to do more or less of moving forward. We’ll be carrying these lessons learned with us into our next strategic cycle, building on the places where we found initial success, iterating on approaches that left room for improvement, and continuing to center the question of “what learning practices do the work we are engaged in and the communities we are accountable to need us to lean into?” We are grateful to continue to learn from and with our peer organizations and learning colleagues in this space!
Arianna Taboada (she/her) is the Director of Learning & Impact at Hopelab, where she stewards the organization’s equity-centered learning and evidence-gathering processes to inform programmatic decision-making across its impact investing, grant-making, research, and field-building efforts. Arianna is a public health social worker by training and earned her MSW and MSPH from the University of North Carolina at Chapel Hill. She can be reached via email at ataboada@hopelab.org and LinkedIn.
