Saga Systems

Saga Insights

Most people think of evaluation as something technical. Formal. Often external.
Something that happens to organisations rather than with them.


But in reality, evaluation is something we all do every day.


Think about buying a car. Most people don't begin by asking for a technical specification sheet. They start by considering what they need the car for. Is it for a long commute? A family? Occasional use? From there, they look at kilometres, reliability, size, fuel efficiency, safety, colour, or cost, not because these measures are inherently important, but because they relate to what matters to them.


The same is true when choosing a partner, a school, a job, or a place to live. We weigh up characteristics, make trade-offs, and apply our own criteria, often unconsciously.


We rarely say, "I am now evaluating," but that is exactly what we are doing.


From everyday judgement to organisational evaluation

In organisations, evaluation serves the same basic purpose, but at a different scale.

At its core, evaluation is the structured process of ethically informed inquiry into what is happening and why, to arrive at a judgement about the 'worth, merit or value' of something (Mertens & Wilson, 2013; Scriven, 2003–04). 


Programs, services, and initiatives are constantly being assessed, whether formally or informally. People notice who participates and who doesn't. They sense when something feels useful or when it doesn't. They adapt their behaviour based on what they see working on the ground.


Formal evaluation makes this process explicit. It establishes shared criteria, deliberately gathers evidence, and supports clearer decision-making. Done well, evaluation helps organisations move beyond assumptions and anecdotes, towards a more considered understanding of what is actually happening.


Importantly, evaluation is not just about proving whether something "worked". It is about learning, improvement, and informed choice.


Evaluation vs research

Research and evaluation often use similar methods, but they serve different purposes.


Research is about building knowledge. It helps us understand how the world works, why things happen, and what tends to be true across different settings. Its value lies in what it contributes to broader understanding.


Evaluation is about action. It helps people understand how a specific program, service, or policy is working in a particular context, and what that means for decisions about what to do next.


In practice, this difference shapes the kinds of questions that need to be asked.

For example, research has already shown that telehealth can be an effective way to deliver care for certain conditions and populations. An evaluation of a telehealth service does not need to re-establish that remote consultations can work. That question has largely been answered by research.


Instead, evaluation focuses on questions such as:

  • Is this service improving access for the people it was designed to support?
  • Are people able to use it easily and confidently?
  • Is it improving continuity of care?
  • Is it working well enough, in this context, to continue, adapt, or expand?


The same pattern applies in many areas. Research shows that regular physical activity improves health. Evaluation then focuses on whether a specific program is helping people be active in ways that are feasible and sustainable for them. Research establishes what works in general; evaluation examines whether it is working here.


Where there is a strong research base, evaluation can be more focused and targeted. Where evidence is limited or contexts are changing, evaluation often needs to be more exploratory, helping build understanding and inform action.


In short:

  • Research helps us understand what works more broadly.
  • Evaluation helps us understand what is working here, for these people, and whether it is good enough for its intended purpose.
     

Why evaluation matters

In complex systems, particularly in health and social impact, outcomes rarely tell the whole story on their own.


Participation rates might be high, but engagement may be low.

Satisfaction scores may look strong, while lived experience tells a different story.

Key indicators may move in the right direction, even as inequities persist beneath the surface.


Evaluation helps surface these tensions. It brings together data, experience, and context to support meaningful judgements about worth, merit and value, and the decisions that follow from them.


David Fetterman describes empowerment evaluation as the use of evaluation concepts, techniques, and findings to foster improvement and self-determination.


At Saga Systems, we see evaluation as an ethically informed inquiry process. It is not about producing a single definitive answer or proving something works in general. It is about understanding what is happening here, why it is happening, and what that means for decisions about what to sustain, adapt, or change — by finding the story within the system. 


In practice, evaluation rarely succeeds or fails because of methods alone.


More often, it succeeds or fails because of readiness: whether an organisation, a team, or the people involved are genuinely prepared to engage with evaluation and use what it produces.

In our experience, when organisations are ready, evaluation becomes a tool for learning, improvement, and informed decision-making. When they are not, even the most rigorous evaluation can struggle to have an impact.


Readiness is not about technical capability. It is not about having perfect data systems, detailed logic models, or prior evaluation experience. It is about whether people are willing and able to engage honestly with what is happening, including what is not working as intended.


Organisations that are ready for evaluation tend to share a few characteristics. They have clarity about why evaluation is being considered and what decisions it is meant to inform. They are open to learning, not just to demonstrating success. Staff and stakeholders feel safe enough to speak honestly about challenges, tensions, and trade-offs. Evaluation is seen as something to work with, not something being done to them.


When these conditions are present, evaluation questions tend to be clearer, findings are more trusted, and the likelihood of change increases. People recognise themselves in the results, and they are more willing to act on what the evidence suggests.


By contrast, when readiness is low, evaluation is often treated as a compliance exercise. Questions are framed to avoid risk. Data is collected, but findings are quietly sidelined. Reports may be delivered, but decisions remain unchanged. In these situations, the issue is rarely methodological. It is structural, cultural, or relational.


This is why, at Saga Systems, we are cautious about rushing into evaluation. Sometimes the most responsible step is not to design a study, but to pause and ask whether evaluation is the right next move — and if so, what kind of evaluation, at what scale, and for what purpose.


Supporting readiness might involve clarifying the purpose of evaluation, creating space for shared sense-making, or building confidence and capability within teams. It might also involve recognising that evaluation is not yet the answer, and that other forms of inquiry or reflection are needed first.


Rigour matters. But rigour without readiness rarely leads to meaningful use.


What we mean by systems

The word system is used a lot at the moment — and at times, it can feel like a buzzword.


At Saga Systems, the word system is deliberately part of our name. It reflects how we understand the work we do, and how we approach questions about impact, access, and change.


There are many valid ways to define a system. Some focus on structures and policies. Others emphasise relationships, interactions, or patterns that emerge over time. These perspectives are not in conflict; they highlight different aspects of how outcomes are shaped.


For people trained in public health, this way of thinking is not new. From early on, system influences were framed through the social determinants of health. For many years, determinants such as housing, education, income, access to services, the built environment, and social connection have been used to explain why outcomes are uneven and persistent across communities. Traditionally, these factors were described as influencing outcomes. What has evolved is our ability to look more closely at how these influences interact — and to identify where, within that complexity, change may be possible.


At Saga Systems, we think about systems across three connected levels. Each level offers a different way to understand how outcomes are shaped and where change might be possible.


Macro: what the system makes normal, affordable, legal, or expected
This includes the wider conditions that shape what is possible across a population. It can involve laws and regulations, cost structures, workforce availability, infrastructure, market forces, and social norms.

Examples include:

  • Seatbelt laws make road safety the norm rather than a choice
  • Smoking restrictions and plain packaging change what is socially acceptable
  • The cost of transport or housing shapes who can access services
  • Workforce shortages affect service availability across regions
     

These conditions often operate in the background, but they strongly influence behaviour and outcomes.


Meso: how services and organisations respond within those conditions
This level focuses on how services, organisations, and local systems function day to day within the broader environment. It includes coordination between services, referral pathways, information flow, and the design of access.

Examples include:

  • Services adjusting capacity in response to predictable demand
  • How referral pathways work between primary care and specialist services
  • Whether services share information in ways that reduce bottlenecks
  • Local transport or scheduling decisions affecting attendance
     

This is often where constraints become visible — and where practical adjustments can make a real difference.


Micro: how people experience and navigate those conditions day to day
At the micro level, systems are felt through lived experience. This includes time, money, responsibilities, trust, confidence, and the realities that shape daily decisions.

Examples include:

  • Wanting to attend a program but being unable to afford transport
  • Balancing work, caring responsibilities, and appointment times
  • Deciding whether a service feels accessible or “for people like me”
  • Navigating multiple services with limited time or energy
     

These experiences are not viewed in isolation. They reflect how the broader system and local services interact in people’s lives.


This way of thinking often becomes clearest through simple examples. A program may be well designed and well received by participants, yet still have limited reach. A physical activity program might run regularly, be consistently full, and receive strong feedback. At the same time, people in the community may say they cannot attend due to transport, timing, cost, or lack of awareness. In this situation, the program itself may be working well, but the surrounding system determines who can access it.


Similarly, pressure can appear in one part of a system even though its source sits elsewhere. A service experiencing growing demand may respond by increasing capacity when the underlying driver is a predictable change in activity elsewhere or a shift in referral patterns. Looking at how parts of the system connect can reveal options beyond simply working harder in the same place.


In some cases, small changes in how information flows or how services coordinate can have an outsized effect. Earlier visibility of demand, clearer pathways between services, or adjustments to how access is organised can prevent problems from compounding — without changing the program itself.


For Saga Systems, revisiting the system is not about stepping away from action or suggesting that meaningful change only happens at scale. It is about understanding why outcomes look the way they do, and using that understanding to support thoughtful, practical decision-making.


Sometimes change sits within a program.
Sometimes it sits between services.
Sometimes it sits in design choices that shape access, coordination, or anticipation of demand.


We acknowledge and feel grateful to live, work and play on the beautiful lands of the Dharawal people.  

Illawarra Shoalhaven, NSW, Australia
