I’ve been reading and thinking a bit about “collective impact” lately. (Here’s the seminal article introducing the buzzword.) It’s a solid, mostly-common-sense framework for thinking about collaborative/coalition efforts. There are five elements that define a “collective impact” approach:
- Common agenda. If you don’t have a shared vision for change, you can’t really expect to collaborate effectively.
- Mutually reinforcing activities. Successful collaborators need to coordinate their activities, play to their strengths, and know their role in the larger effort.
- Continuous communication. If you don’t communicate regularly you can’t hope to build enough trust and shared language to collaborate effectively.
At this point, you’re probably thinking, “Jon, why are you wasting my time with such obvious folderol?” Most coalition efforts I’ve seen fulfill these first three conditions pretty well. Hang in there, it’s the next two that are the most interesting:
- Shared measurement systems. Hmm, now we’re getting somewhere. Collective impact suggests that collaborative efforts need to agree on a shared set of indicators of success and on the systems for monitoring and reporting on those indicators. Without shared indicators, collaborators have no way to really know whether they are succeeding or failing, and no feedback systems that allow them to “course correct” as needed.
- A backbone support organization. Proponents of collective impact assert that successful collaboration efforts need to have a strong, staffed organization at their center, in order to run the collaborative process with sufficient intensity and focus to drive it forward in the face of distractions. It’s not clear to me whether they think a strong “lead coalition partner” fulfills this condition or not. (I suspect not.)
It’s these last two points where most collaborations falter, and it’s probably not coincidental that they are the two that require sustained, long-term resource commitments. How do the collaborations you’re involved with stack up?
In fifteen years of consulting work, I learned the hard way how to say “no” to projects. It was always a little bit painful, because, like most consultants, I was very dedicated to “being of service” and turning down a project always felt a little bit like a violation of that core value. But as I came to learn, you only harm yourself and the client by taking on a project that you suspect is teed up for failure, and over time, my colleagues came up with a pretty finely tuned set of requirements for what makes for a successful project.
Here’s a short list of the reasons why consultants should sometimes say a polite but firm “no” to projects:
- Misaligned expectations around scope and budget (can go in either direction)
- Timeline + project scope exceeds consultant’s currently available project resources
- Client does not have sufficient project management/leadership resources available
- Client executives not strongly or clearly bought into the project
- Client not committed to consultant’s process/methodology
- Client technical needs aren’t a good fit for consultant’s core competencies, despite mission/attitude alignment
- Client indicates a desire for an “order-taker” type of implementer (“Consultant! Do what I say!”) rather than a deeper strategy + vision partnership. (The former is perfectly good work, but skilled consultants are usually more interested in the latter)
It’s nice to see that Washington Nonprofits, a statewide association of Washington nonprofit organizations, is finally getting off the ground. Washington has long been one of the few states that doesn’t have such an organization.
Sitting with an idle laptop on a UW wireless network here at the Evans School, I typically see a constant 40-50 kb/sec of traffic flowing into my machine. At first I thought somebody was attempting to hack or DDoS my laptop, but digging into the network packets with LittleSnitch showed me that all of this traffic was due to mDNS (Bonjour) broadcast traffic from other Apple machines on the network.
Allowing these network broadcasts seems like a huge waste of bandwidth and battery life. Apparently other university IT administrators agree; Princeton University filters mDNS traffic from its wireless networks. It would be nice to see UWIT do the same.
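To put that “huge waste” in perspective, here’s a quick back-of-the-envelope sketch of what a constant 40-50 kb/sec of idle chatter adds up to over a day (assuming the figure is kilobytes per second, as network monitors like LittleSnitch typically report; the function name is just for illustration):

```python
# Back-of-the-envelope: total daily traffic from a constant idle broadcast rate.
SECONDS_PER_DAY = 24 * 60 * 60

def daily_gb(kb_per_sec):
    """Convert a constant rate in kB/sec into total GB per day."""
    return kb_per_sec * SECONDS_PER_DAY / 1_000_000

low, high = daily_gb(40), daily_gb(50)
print(f"{low:.1f}-{high:.1f} GB per day")  # roughly 3.5-4.3 GB per day
```

A few gigabytes a day of multicast noise hitting every idle laptop on the network is a nontrivial cost, multiplied across thousands of devices.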
With the recent start of tolling on SR-520 here in Seattle, the public’s attention is suddenly on traffic volumes on 520 and I-90. So, this morning, I went over to the WSDOT website to see if I could find a simple listing of traffic volumes for the past few weeks. Nothing, just a few random numbers sprinkled in their press releases.
Obviously, WSDOT is collecting this data. It’s ridiculous that it’s not being published in formats that would make it easy to read and analyze. What a huge open government data fail.