
MWP classification stats and volunteer behavior

Volunteering on Zooniverse has the same structure as other volunteer work when it comes to classifications. Usually, most volunteers classify only a small set of subjects: they drop by and try the project until they get bored, which can happen after one subject, ten subjects or one hundred subjects.
Less common are engaged volunteers who classify a few subjects regularly until the end of the project. This would actually be the desirable pattern: with a base of a few hundred volunteers classifying 10 images per day, every project would be finished within a few months or a year. In reality, however, volunteers follow their own interests and not this logic.
A good part of every project is done by a few volunteers who have a lot of time and classify a lot (like myself). Their classifications can sometimes be given extra weight, because they are effectively “experts”. The same goes for the “regulars”.

Because most volunteers only classify when they feel like it, the classification rate starts high at launch, as “everyone” tries the project. In smaller projects, or in projects with strong support, this initial surge can be enough to classify all subjects. A launch can be announced on the Daily Zooniverse (the Zooniverse news blog), it appears on the Zooniverse projects page, and there is usually promotion by the supporting institutions. The more popular the topic, the more coverage it gets from other media.
The launch day/week is usually the peak of every project. Everyone tries the project, and in the second day/week many volunteers drop out while many new volunteers try it for the first time. The number of newcomers is lower than the number of drop-outs, so the rate keeps falling until it reaches a minimum.

Here are the weekly classification counts from the “Milky Way Project” phase 3. The project aims to collect more than two million classifications.

MWP stats per week – launches of Zooniverse astronomy projects and important events

You can see that the launch is clearly the largest peak, and after it the classifications decrease. But one small peak stands out: it appears in the week after the launch of Gravity Spy, a Zooniverse astronomy project that classifies “glitches” from the gravitational-wave interferometer LIGO.
This positive effect shows up for launches of astronomy projects in general. Volunteers who try the new project eventually get bored and move on to another, older project “inside their field”.
A very positive effect came from the launch of “Backyard Worlds: Planet 9”, a project that uses WISE data to search for brown dwarfs and Planet Nine.
The weekly count rose from more than 15,000 in the week before that launch, to more than 25,000 in the week of the launch, to more than 35,000 in the week after the launch.
Backyard Worlds had a lot of publicity, and some of its volunteers also tried the “Milky Way Project” after they got bored with Backyard Worlds. The effect is that the classifications increase gradually, instead of peaking at the beginning as they do at a project’s own launch.
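As a side note, counts like the ones in these plots can be computed from a project’s classification data export. The sketch below is only a minimal example under assumptions: it expects a Panoptes-style CSV with a “created_at” timestamp column, and the file name is a placeholder. Resampling per week gives a curve like the one above, resampling per day one like the plot below.

```python
# Minimal sketch: aggregate a Zooniverse classification export into weekly/daily counts.
# Assumptions: a CSV export with a "created_at" timestamp column (typical for Panoptes
# exports, but check your file); the file name below is a placeholder.
import pandas as pd

df = pd.read_csv("milky-way-project-classifications.csv", parse_dates=["created_at"])

weekly = df.resample("W", on="created_at").size()  # classifications per week
daily = df.resample("D", on="created_at").size()   # classifications per day

print(weekly.tail())
print(daily.tail())
# weekly.plot() / daily.plot() would produce curves similar to the ones shown here.
```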

MWP stats per day – launch of “Backyard Worlds: Planet 9” on Feb 15, 2017 (Feb 22, 2017 was a technical glitch)

After the launch of Backyard Worlds there were several other important events, such as a “Deep Astronomy Coffee Hangout” on YouTube, where people could interact with the researchers in a live stream via chat and Twitter. Together with the launch of two other astronomy projects, this produced a plateau of higher classification rates. After the Stargazing Live event the classifications decreased, just as they did after the launch of the MWP.

A clear peak appears in the middle, when the researchers sent a “Help me!” e-mail to the volunteers. This is common for long-running projects that are low on classifications or that have a new set of subjects. It is very effective at first, but it wears off quickly with a dynamic similar to the launch.
Three other project launches, Hubble: Hot Stars, Planet 4: Ridges and Supernova Sighting, show no visible relation to the Milky Way Project’s classification rate.

For the Milky Way Project a desirable rate would be 4,000–5,000 classifications per day, but the current minimum is only about 1,000–1,500 classifications per day.
One possible reason is the difficulty of the project. A classification can require up to four parts:

  1. Is there something?
  2. What is it?
  3. Where is it?
  4. What is the shape of it?

The Milky Way Project requires all of those parts, while other projects require only some of them. Beginners often struggle with at least one part, and this insecure feeling is a wall to becoming a regular or expert, which would otherwise raise the minimum classification rate.
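To make this concrete, here is a toy sketch that compares projects by how many of those decision parts a single classification involves. The part names and the task lists are illustrative assumptions, not the actual workflow definitions of any project.

```python
# Toy sketch: compare projects by how many decision parts one classification requires.
# The part names and the task lists are illustrative assumptions, not real workflows.
PARTS = ("is there something?", "what is it?", "where is it?", "what shape is it?")

workflows = {
    "Milky Way Project (all four parts)": set(PARTS),
    "hypothetical yes/no project": {PARTS[0]},
    "hypothetical marking project": {PARTS[0], PARTS[2]},
}

for name, parts in workflows.items():
    difficulty = "harder for beginners" if len(parts) >= 3 else "easier entry"
    print(f"{name}: {len(parts)} part(s) per classification ({difficulty})")
```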


3 thoughts on “MWP classification stats and volunteer behavior”

  1. “this insecure feeling is a wall to becoming a regular or expert”: that’s exactly the wall I faced in this project, as in some others. “Am I doing it right or wrong?” is the main thing I wonder when I participate in those projects, and when, after several hours of practice, I reach the point where I feel that I don’t know, I can’t stand the feeling of clicking “at random” and I give up. A good example from the MWP are the “umbrella bow shocks” you talk about in another post: I felt they were bow shocks, mainly because of the driving star and because of logic (there must be bow shocks oriented toward our direction), but it was in opposition to the field guide. For the bubbles, I could find so many, overlapping and crossing each other, that I thought I was probably wrong (in the end I saw I wasn’t so wrong, by reading an explanation about bubbles on the team’s blog, but that explanation came very late). For the yellow balls, the task was not to mistake them for other yellow balls that weren’t yellow balls (I never found the solution).
    For me, these questions could have been solved if I had found many more examples given by the team, but I see there are always very few examples in those projects, maybe so as not to kill the “magic of the crowd” effect. On the other hand, I see in your article that you are rather looking for at least regular volunteers. Whether the “magic of the crowd” effect should be expected in any kind of project is something I wonder about, because the crowd may also make the same mistakes; I see “coffee rings” everywhere, and I’m surely not the only one :)).
    Well, all that to say that I think you are very right when you talk about feeling secure or insecure with the classifications one does. It was my problem, and I’m inclined to think I’m not exceptional, so others may have had the same problem.
    I will add that in this project, even though I left it a few months ago, I appreciate that the research team is really present on the Talk (which is not the case on all projects…), thanks.


    1. Thank you for your comment, PhilippeC. This feedback is really helpful.
      There are currently different approaches to address this problem in some projects, like simulated data on the PlanetHunters website, or “Gold-classifications” for GravitySpy, real gravitational-wave “chirps” and different “levels”. The Milky Way Project does have the “Learning the Ropes” workflow, but that might be too little (and we don’t have a field guide).
      Simulated data is also used to see how complete the classifications are. Only recently I read that the Milky Way Project uses simulated bubbles; I did not notice this at all. I also read on the Talk that the different zoom levels will be handled differently in the analysis.
      But would something like an official Zooniverse wiki be helpful? It would again require some people to manage it and volunteers who want to write and translate.


      1. I can’t say much about a Zooniverse wiki, because I don’t see what its purpose would be (you may be more precise via email, if you wish). Until now, whatever the project on Zooniverse, I haven’t felt the need for a supplemental interface. I would say that the teams’ blogs, although few teams really use this easy solution, would be enough for my taste.

        Simulated data may help a team measure the quality of the classifications, but it doesn’t change what happens to a volunteer in front of a picture and his/her doubts: it’s not a solution for the volunteers themselves.

        There’s no field guide in MWP, but there is a button “help me with this task”, in which you may see that big bow shocks are selected… but not the tiny (farthest) ones… why? Whatever hypothesis one makes from these examples, the doubt will remain.
        All I can say is that on MWP, I would have liked many more examples (positive and negative), and not only one or two “ideal” and easy ones. Why not twenty or thirty pictures showing the experts’ choices?

        It doesn’t seem to be specific to MWP; it’s the same for almost all projects. So I thought (and still think) there is some fundamental statistical law that forbids this kind of too-precise help, and I didn’t want to insist on the matter, moving on to other citizen science projects that don’t have this problem (text transcription on Wikisource or Smithsonian Digital Volunteers).

        But when I read your article, I made my comment, because I think more people would feel less insecure if they had more examples, and if those examples introduced some bias into the volunteers’ choices, it might be detected.

