[Events] Thoughts about evaluation metrics for open source comes to campus
Benj. Mako Hill
mako at atdot.cc
Thu Oct 11 21:49:29 UTC 2012
<quote who="Asheesh Laroia" date="Thu, Oct 11, 2012 at 03:59:24PM -0400">
> * Find ways to add behavioral metrics, such as looking at people's
> Github account activity.
Right. Any other way to track people's behavior afterwards without
having to rely on asking them for more information is great. IRC
participation, mailing list participation, etc. are all awesome and
very convincing.
> * He very much likes comparative measures, if we can cause people to
> randomly select into pools where they do and don't get to come to
> our workshops.
What you want is to ensure that the "treatment" effect you are trying
to estimate is "exogenous" relative to the outcomes you care
about. You might find that the people in your seminars get more
involved in free software afterwards, but those people were also
likely more motivated and more likely to get involved anyway. The
best source of exogeneity is randomness.
My suggestion: since you already get people to fill out a survey
before events, you could make a big list of everyone you think will
do well at an event, and then hold a lottery for those people. This
might also avoid the current situation where, by selecting for the
most interested people, you end up helping the people least likely to
need it.
> One thing we could measure is account activity for people on sites
> like Github. Taken as an aggregate, and measured over time, you'll
> see a spike right at the workshop, and then some decay curve. Do the
> people who attend the workshop
If you ask people to create a GH or similar account *before* the
event, you could use that to look at the effect of the event on
subsequent behavior. And if you're randomly selecting the pool, you
can use the overflow group as a control. It's a randomized controlled
experiment of the effect of Hatch Opening!
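With a randomly assigned overflow group, the analysis can be as simple as a difference in means. A sketch with made-up numbers (the activity counts below are hypothetical, e.g. GitHub events in the months after the workshop):

```python
# Hypothetical post-event activity counts for the randomly invited group
# and the randomly assigned overflow (control) group.
treated = [12, 5, 9, 14, 7]
control = [4, 6, 3, 8, 2]

def mean(xs):
    return sum(xs) / len(xs)

# Because assignment was random, the simple difference in means is an
# unbiased estimate of the average effect of attending the workshop.
effect = mean(treated) - mean(control)
print(f"estimated treatment effect: {effect:+.1f} events")  # → +4.8 events
```

In practice you would also want a standard error or a t-test on that difference, but the core comparison really is this simple once assignment is random.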
> One thing we're planning on doing in the long run is asking, within
> a computer club that hosted us, via survey, how active people are.
> We were planning on waiting ~6 months after the actual event, which
> strikes Mako as a good idea because it smooths out certain kinds of
> bias if you ask people shortly after an event.
Right.
> Mako points out that even though he doesn't like survey data very
> much, it's definitely possible to use behavioral data to support the
> validity of survey data. We could end up being able to make claims
> like, "Based on observables, there's no significant difference
> between people who answer the survey, and those that do," which
> obviously would be a great thing to know.
Getting a survey *before* your events seems like a great idea for
doing any before, after, and after-after comparisons.
Regards,
Mako
--
Benjamin Mako Hill
mako at atdot.cc
http://mako.cc/
Creativity can be a social contribution, but only in so far
as society is free to use the results. --GNU Manifesto