About the challenge:
The CuseHacks Datathon is Syracuse University’s annual 24-hour datathon. A datathon is a data-focused competition where students come together to analyze, visualize, and interpret complex datasets to uncover insights and solve real-world problems. CuseHacks is open to data enthusiasts, programmers, analysts, designers, and anyone curious about working with data! In addition to diving into exciting datasets, attendees will have the chance to network with industry professionals, attend hands-on workshops, and take part in fun activities.
Get started
Day 1: February 21st
8:00am - Breakfast starts
8:30am - Doors open
11:30am - Opening Ceremony
12:00pm - Coding Begins!
12:30pm - Lunch starts!
3:30pm - Workshop: Winning a Datathon: From Basics to Best Practices!
6:00pm - Dinner starts!
9:00pm - Refuel with Energy Drinks
11:00pm - Movie night starts!
Day 2: February 22nd
9:30am - Breakfast snacks!
12:00pm - Coding Finishes
12:30pm - Lunch Begins
1:00pm - Judging Begins
3:30pm - Judging ends
4:00pm - Closing Ceremony (Winners announced, prizes given out)
Requirements
Tracks 1 and 2: Image and Text Classification
To be eligible for leaderboard ranking and prizes, participants must submit all of the following:
Model Predictions
Prediction outputs for the validation set, uploaded in the specified format.
Reproducible Source Code
A link to a public repository containing all code required for data preprocessing, model training, and inference. The code must allow judges to reproduce the reported accuracy.
Technical Description
A brief report in PDF or Markdown format that clearly documents:
Preprocessing: Any data cleaning, normalization, tokenization, augmentation, or transformation steps applied to the dataset.
Architecture: Detailed description of model structure, including layers, design choices, and if applicable, the specific pretrained backbone used such as ResNet or GloVe.
Training Protocol: Hyperparameters, optimizers, learning rate schedules, batch sizes, number of epochs, validation strategy, and evaluation metrics.
Incomplete submissions may be excluded from leaderboard consideration.
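A common reason a submission fails the reproducibility check is unseeded randomness. As an illustration only (not an official CuseHacks requirement), one minimal pattern is a single helper that seeds every random-number source your pipeline touches:

```python
import os
import random

def set_seed(seed: int = 42) -> None:
    """Seed the RNG sources commonly used in a training pipeline.

    Sketch for illustration: the numpy/torch lines are commented out
    because this example assumes only the standard library.
    """
    random.seed(seed)                        # Python's built-in RNG
    os.environ["PYTHONHASHSEED"] = str(seed)  # hash randomization
    # If your stack uses them, also seed:
    # numpy.random.seed(seed)
    # torch.manual_seed(seed)

# Two runs with the same seed produce identical numbers,
# which is what lets judges match your reported accuracy.
set_seed(42)
a = [random.random() for _ in range(3)]
set_seed(42)
b = [random.random() for _ in range(3)]
assert a == b
```

Calling `set_seed` once at the top of your training script, and recording the seed value in your technical description, is usually enough for judges to re-run your code end to end.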
Track 3: Urban Data Analysis and Prediction
Judging for Track 3 is based on the quality of insights, methodology, and communication rather than leaderboard performance.
To be eligible for awards, participants must:
Submit Reproducible Source Code
A link to a public repository containing all code required for data preprocessing, model training, and inference. The code must allow judges to reproduce the results.
Presentation
Teams must present their findings in front of a judging panel. Presentations should clearly explain the problem explored, analytical approach, insights discovered, and any predictive modeling performed.
Visualizations
All charts, maps, dashboards, or graphical representations used to support findings must be included. Visualizations should be clear, accurate, and directly tied to the presented insights.
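Most teams will reach for matplotlib, Plotly, or a dashboard tool, but the core requirement is simply that each visual be generated from, and traceable to, the data behind the insight. A dependency-free sketch of that idea (the neighborhood names and values below are made up for illustration):

```python
def bar_chart_svg(data: dict[str, float], width: int = 300, bar_h: int = 20) -> str:
    """Render a labeled horizontal bar chart as an SVG string.

    Minimal sketch: bars are scaled to the largest value so the
    chart stays accurate and directly tied to the input data.
    """
    max_v = max(data.values())
    rows = []
    for i, (label, value) in enumerate(data.items()):
        w = int(width * value / max_v)       # bar length proportional to value
        y = i * (bar_h + 5)                  # stack bars with a 5px gap
        rows.append(f'<rect x="0" y="{y}" width="{w}" height="{bar_h}" fill="steelblue"/>')
        rows.append(f'<text x="{w + 4}" y="{y + bar_h - 5}">{label}: {value}</text>')
    height = len(data) * (bar_h + 5)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width + 120}" height="{height}">' + "".join(rows) + "</svg>")

# Hypothetical urban data for the sketch:
svg = bar_chart_svg({"Downtown": 42.0, "Eastside": 27.5, "Northside": 13.0})
```

The resulting string can be written to a `.svg` file and opened in any browser; the same traceability principle applies whatever plotting library you actually use.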
Prizes
Gaming Monitor
Air Fryer
Gaming Keyboards
Gaming Mice
Gaming Headphones
Anker Powerbanks
Judges
Jason Sharf
Web Development Instructor at Syracuse University
Aryan Apte
Workshop Lead at CuseHacks and AI Researcher at OSPO-SU
Christopher Dunham
Assistant Teaching Professor at the iSchool
Julie Hall
Computer Consultant at Syracuse University
Judging Criteria
Track 1: Validation Accuracy
Ranked by highest validation accuracy. Pass/Fail checks: category compliance (pretrained vs. scratch), zero LLM usage, and full reproducibility. Trained outputs and technical descriptions required.
Track 2: Validation Accuracy
Ranked by highest validation accuracy. Pass/Fail checks: category compliance (non-LLM embeddings vs. scratch), zero LLM usage, and full reproducibility. Trained outputs and technical descriptions required.
Track 3: Best Insight
Evaluated on insight relevance (10 pts), evidence strength (7 pts), clarity (5 pts), and originality (3 pts).
Track 3: Best Trend Found
Judged on trend correctness (8 pts), depth of analysis (7 pts), multi-dataset usage (5 pts), and interpretability (5 pts).
Track 3: Best Visualization
Evaluated on accuracy (8 pts), readability (7 pts), aesthetics (5 pts), and communication of findings (5 pts).
Track 3: Best Prediction
Focuses on problem formulation (7 pts), data usage (6 pts), methodology (6 pts), and justification (6 pts). Reasoning is prioritized over raw accuracy.
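Each of the four Track 3 rubrics above sums to the same 25-point maximum. Encoding the point values as data (copied directly from the criteria) makes that easy to verify:

```python
# Track 3 rubrics, with point values taken from the judging criteria above.
RUBRICS = {
    "Best Insight": {"insight relevance": 10, "evidence strength": 7,
                     "clarity": 5, "originality": 3},
    "Best Trend Found": {"trend correctness": 8, "depth of analysis": 7,
                         "multi-dataset usage": 5, "interpretability": 5},
    "Best Visualization": {"accuracy": 8, "readability": 7,
                           "aesthetics": 5, "communication of findings": 5},
    "Best Prediction": {"problem formulation": 7, "data usage": 6,
                        "methodology": 6, "justification": 6},
}

# Maximum achievable score per award: 25 points for each.
totals = {award: sum(points.values()) for award, points in RUBRICS.items()}
```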
