[Events] Thoughts about evaluation metrics for Open Source Comes to Campus
asheesh at asheesh.org
Thu Oct 11 19:59:24 UTC 2012
Hey all eventers,
I had a conversation with Benjamin "Mako" Hill earlier today about how we
can more solidly answer the question, "Can we measure the positive impact
that campus.openhatch.org events have on participation?"
* Find ways to add behavioral metrics, such as looking at people's Github
activity
* He very much likes comparative measures, if we can cause people to
randomly select into pools where they do and don't get to come to our
events
(This is a conversation log as extracted from paper notes + memory.)
Mako doesn't like surveys all that much.
He does like behavioral measures a lot -- if you can passively look at
people's activity, without them having to report it, you can be more sure
about what you're looking at.
He asks me, "People who come to these workshops -- they don't have
accounts on [sites like Github] yet, right?" I answer that they don't
generally have accounts on sites like Github.
One thing we could measure is account activity for people on sites like
Github. Taken as an aggregate, and measured over time, you'll see a spike
right at the workshop, and then some decay curve. Do the people who attend
stay active after that initial spike?
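A minimal sketch of what that aggregate measurement could look like. The
dates and contribution counts below are entirely made up, just to
illustrate the spike-then-decay shape we'd expect; real numbers would come
from something like people's public Github activity:

```python
from datetime import date, timedelta

# Hypothetical data: public contribution counts per attendee, bucketed by
# week. Invented values showing a spike at the workshop, then decay.
workshop_week = date(2012, 10, 8)
contributions_by_week = {
    workshop_week - timedelta(weeks=1): [0, 0, 1],  # quiet before the event
    workshop_week:                      [5, 3, 4],  # spike at the workshop
    workshop_week + timedelta(weeks=1): [2, 1, 2],  # decay afterwards
    workshop_week + timedelta(weeks=2): [1, 0, 1],
}

# Aggregate across attendees, keyed by weeks-since-workshop.
totals = {}
for week, counts in contributions_by_week.items():
    offset = (week - workshop_week).days // 7
    totals[offset] = sum(counts)

for offset in sorted(totals):
    print("week %+d: %d contributions" % (offset, totals[offset]))
```

Plotting those weekly totals for attendees against a comparison group is
where the decay-curve question would get answered.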
It would be nice to be able to measure the impact on attendees vs.
non-attendees by randomly sorting people into "Yes, come" and "No, don't
come" based on factors *other* than enthusiasm. There are a couple of ways
to do that:
1. Ask people to create an account on e.g. Github during signup. If the
event fills up, we can track both the people who could make it and the
people who looked great (by e.g. enthusiasm) but whom we had to turn down
due to capping the size.
2. Similar to above -- tell people on signup, "If you can't make it to our
event, please sign up for a Github account and fill out this other form"
and use that to track people. Compare people who could make it with those
who couldn't.
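A sketch of how the random sorting in option 1 might work once signups
exceed capacity. The names, cap, and seed are all made up; the point is
just that the split is driven by chance rather than enthusiasm:

```python
import random

# Hypothetical signup list; under option 1, everyone already created a
# Github account during signup, so both groups stay trackable.
signups = ["alice", "bob", "carol", "dave", "erin", "frank"]
capacity = 4  # made-up event cap

rng = random.Random(42)  # fixed seed so the split is reproducible
shuffled = signups[:]
rng.shuffle(shuffled)

attendees = shuffled[:capacity]   # "Yes, come"
comparison = shuffled[capacity:]  # turned away by capacity, not enthusiasm

print("attend:", sorted(attendees))
print("compare:", sorted(comparison))
```

Because the split is random, any later difference in activity between the
two pools is easier to attribute to the event itself.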
One thing we're planning on doing in the long run is surveying the members
of a computer club that hosted us to ask how active they are. We were
planning on waiting ~6 months after the actual event, which strikes Mako
as a good idea because it smooths out certain kinds of bias that arise
when you ask people shortly after an event.
Mako points out that even though he doesn't like survey data very much,
it's definitely possible to use behavioral data to support the validity of
survey data. We could end up being able to make claims like, "Based on
observables, there's no significant difference between people who answer
the survey and those who don't," which obviously would be a great thing to
be able to say.
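A toy version of that validity check: compare some behavioral observable
(say, commit counts) between survey responders and non-responders. The
numbers here are invented, and a real analysis would run a proper
significance test rather than eyeballing a gap in means:

```python
# Hypothetical commit counts pulled from public activity, split by whether
# the person answered our survey. If the groups look similar on
# observables, survey answers are less likely to be skewed by who
# chose to respond.
responders = [3, 5, 2, 4, 6]
non_responders = [4, 2, 5, 3, 4]

def mean(xs):
    """Arithmetic mean of a non-empty list of numbers."""
    return sum(xs) / float(len(xs))

gap = abs(mean(responders) - mean(non_responders))
print("responders: %.1f, non-responders: %.1f, gap: %.1f"
      % (mean(responders), mean(non_responders), gap))
```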
More information about the Events mailing list