TRON Grand Hackathon Season 3 - Community Vote

(Not sure if I’m allowed to ping @admin.hackathon without an actual violation report, sorry if not)

To ensure transparency and fair voting, it would be great if you shared the voting results publicly in CSV/SQL (or another suitable machine-readable) format. I have an appropriate academic background and would really like to play with the data in R to look for outliers and suspicious activity. The data should be anonymised, of course, and only admins should check IP addresses, nicknames, etc. The fields necessary for basic statistical analysis are, IMO, the following: project ID, voter ID (randomised, so multiple votes from one person can be joined together), vote timestamp, and whether the voter represents any of the projects (such voters are definitely legitimate).

You could also share two datasets: the initial votes (everything visible as of voting close time) and the filtered votes (the dataset after the hackathon team removed suspicious votes).
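To make this concrete, here is a minimal R sketch of what working with such an export could look like. The column and file names are my own suggestions, not anything the hackathon team has committed to:

```r
library(readr)
library(dplyr)

# Hypothetical file names and columns: project_id, voter_id, vote_ts, is_project_member
votes_initial  <- read_csv("votes_initial.csv")   # all votes as of voting close
votes_filtered <- read_csv("votes_filtered.csv")  # after suspicious votes were removed

# Basic checks that become trivial with this data:
count(votes_initial, project_id, sort = TRUE)     # votes per project before filtering
count(votes_filtered, project_id, sort = TRUE)    # ... and after
count(votes_initial, is_project_member)           # how many voters represent a project
```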

As an example of why this is important, I did a quick check on one possible question. Null hypothesis (assumption): “The number of link opens (shown next to every link) for web3-track projects does not depend on their position in the list”. This hypothesis can be rejected: correlation analysis of views ~ index shows a noticeable negative correlation (about -0.48 linear, and -0.49 for log(views) ~ index), and the chart reveals the relationship is in fact non-linear, so the coefficient doesn’t account for all the variation. Even a simple comparison of the top vs bottom half shows a huge difference, with p < 0.04 for the assumption “the first group received fewer views” and p < 0.08 for “the first and second halves received the same number of views”. This data was collected by manually copying the numbers and removing row 4 (where the count doesn’t show up at all, which is certainly a bug - I clicked that link).
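For reference, here is a minimal R sketch of the kind of check described above, assuming a data frame `d` with columns `index` (position in the list) and `views` (manually copied link-open counts); the exact tests used are not stated in the post, so a plain two-sample t-test stands in for the half-comparison (a Wilcoxon test would be an equally reasonable choice):

```r
# Correlation of link opens with list position
cor(d$index, d$views)           # linear correlation (views ~ index)
cor(d$index, log(d$views))      # log(views) ~ index
cor.test(d$index, d$views)      # significance of the correlation

# Top half of the list vs bottom half
top    <- d$views[d$index <= median(d$index)]
bottom <- d$views[d$index >  median(d$index)]
t.test(top, bottom, alternative = "greater")  # one-sided: top half received more views
t.test(top, bottom)                           # two-sided: both halves received the same

plot(d$index, d$views)          # the scatter plot makes the non-linear drop-off visible
```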

This is, strictly speaking, unfair - though it doesn’t seem to heavily affect the voting results, which in itself looks suspicious. And this effect could have been easily avoided by shuffling the rows for every unique visitor (a few lines of JavaScript). What I said above basically means that most voters opened the first 5-6 links to examine those projects in detail and judged the remaining projects only by name and the pitch in the title. This is … well, not great, if those 5-6 projects are always the same.

This is not required by any rules, of course, but it would earn more of the participants’ trust. I want to be able to check whether the voting and your analysis were fair.

To sum up (if anyone is interested in my feedback, though I doubt it): I’m really frustrated by the organisation (first-time participant here, with 3 other hackathon prizes before), and I will definitely not come back for season 4 (and perhaps not to the TRON ecosystem either, but for different reasons that are broader than a hackathon-related forum post and perhaps not worth discussing - I simply have better-documented and more convenient alternatives, plus this is partly opinion-based). Public voting may be a good idea, but the rules for it have to be much more explicit and well-defined. However, at least some public data could slightly improve the current situation - that also includes publishing all disqualification reports and explaining them when participants reasonably disagree with the decision.
