Dhruva's Dumb Website

UT Invitational 2019

Science Olympiad · 2 min read

Astronomy C

Here are the exam materials, along with a score histogram. The mean score was 52%.

  • Astronomy C EXAM
  • Astronomy C KEY
  • mystery.fits

[Score histogram for the Astronomy C exam]

This year, I made the exam significantly easier. As a result, teams did pretty well on it; one team got close to a perfect score. According to a feedback form, the difficulty and length of the exam were about right for most teams. Keep in mind that more than 50% of competitors responded that they were first-time competitors in astronomy. I'm pretty satisfied with the resulting score distribution; writing "easier" exams is definitely smart for early-season competitions. The downside is that veteran teams have less challenging material to practice with.

For the first time in Astronomy C history, we ran the event with live Js9. Competitors were allowed to access the Js9 website and upload a data file for analysis. Teams really struggled with this. Most teams rated the difficulty of the Js9 question as 5/5. I don't think the actual questions were hard; the trick was knowing how to use Js9 in order to do the analysis. 75% of teams felt that it was hard to figure out where to learn how to use and interpret Js9, whereas only 5% felt that there were enough resources to learn it. I agree that resources are scarce. You can find a "Getting Started" guide here.
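If you want to practice outside of Js9, it helps to know that a FITS file like the one above is not mysterious at its core: the primary header is a sequence of 2880-byte blocks made of 80-character ASCII "cards". As a minimal sketch (this is my own illustration, not part of the exam, and a real tool like Js9 or astropy handles many cases this parser ignores), you can peek at a header in pure Python:

```python
# Minimal sketch of a FITS primary-header reader (pure stdlib).
# FITS headers are 80-character ASCII "cards" of the form
#   KEYWORD = value / optional comment
# packed into 2880-byte blocks; the header ends at an END card.
def read_fits_header(data: bytes) -> dict:
    """Parse keyword/value cards from the start of a FITS file."""
    header = {}
    for offset in range(0, len(data), 80):
        card = data[offset:offset + 80].decode("ascii", errors="replace")
        if card.startswith("END"):
            break
        if "=" not in card:
            continue  # COMMENT / HISTORY / blank cards carry no value
        key, _, rest = card.partition("=")
        # Drop any trailing comment, then surrounding quotes/whitespace.
        value = rest.split("/", 1)[0].strip().strip("'").strip()
        header[key.strip()] = value
    return header

# Usage (filename from the exam materials, assuming it's in the cwd):
# with open("mystery.fits", "rb") as f:
#     print(read_fits_header(f.read(2880)))
```

Seeing keywords like `NAXIS1`/`NAXIS2` (image dimensions) or `TELESCOP` in the raw header makes the values Js9 displays feel a lot less like magic.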

As always, get in touch if you have any questions!

Data Science C

[Photo of competitors]

Here are the exam materials. Not enough teams participated in the event to collect meaningful statistics, so I've skipped the histogram; the mean score was about 50%.

Teams reported (on a survey) that the exam was about the right difficulty, while the coding challenges were a bit on the hard side. This is pretty consistent with the subscores I saw. Additionally, teams seemed to want more programming concepts on the written test, and more stats and data science on the coding challenges, which is actually the opposite of what I thought people would say. I think it makes sense for the advanced stats/ML stuff to be more conceptual, which is better suited for a written test, while the hands-on programming should emphasize programming literacy.

Many teams struggled with the coding challenges because they didn't remember certain syntax or how to manipulate a certain data structure. This is what the unlimited notes are for. I think it's a smart strategy to print out the documentation for the basic Python data structures and basic syntax help. Being familiar with the available methods can greatly cut down the time it takes to code things. For example, some teams didn't like how tedious the problems felt; but they were only tedious if you implemented everything by hand. Using the available string methods instead of manually doing string manipulation saves a lot of time, and also makes your code more readable. Long story short, be familiar with the methods and functions available to you. That's what you have unlimited notes for!
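To make the point concrete, here's a hypothetical parsing task of the kind I mean (the field names and input line are made up for illustration, not taken from the exam). Splitting on delimiters by hand with index arithmetic is tedious and error-prone; `str.split`, `str.partition`, and `str.strip` do the whole job in a few readable lines:

```python
# Hypothetical task: parse "key: value" fields from a messy line of text.
line = "  temp: 21.5 ; humidity: 0.43 ; wind: 3.2  "

pairs = {}
for field in line.split(";"):          # split into "key: value" chunks
    key, _, value = field.partition(":")  # split each chunk at the first ':'
    pairs[key.strip()] = float(value)     # strip whitespace; float() ignores padding
# pairs == {"temp": 21.5, "humidity": 0.43, "wind": 3.2}
```

Every method here (`split`, `partition`, `strip`) is in the first page of the `str` documentation, which is exactly the kind of page worth having in your printed notes.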

As always, get in touch if you have any questions!

© 2020 by Dhruva's Dumb Website. All rights reserved.
Theme by LekoArts