2025-Winter-DSC148-Introduction to Data Mining

Undergraduate Class, HDSI, UCSD, 2025

Class Time: Tuesdays and Thursdays, 8 to 9:20 AM. Room: PCYNH 106 (1st week over Zoom). Piazza: piazza.com/ucsd/winter2025/dsc148

Online Lecturing

To give students on the waitlist an opportunity to learn more about this course, lectures in the first week will be delivered over Zoom: https://ucsd.zoom.us/j/97017584161. These lectures will be recorded.

Overview

This course focuses on introducing current methods and models that are useful for analyzing and mining real-world data. It will cover frequent pattern mining, regression & classification, clustering, and representation learning. No previous background in machine learning is required, but all participants should be comfortable with programming and with basic optimization and linear algebra.
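As a rough calibration of that baseline, a small warm-up along the lines below (an illustrative sketch using NumPy, not part of the course materials or assignments) should feel routine; it fits a least-squares line, which is also the first supervised-learning topic on the schedule.

```python
# Warm-up sketch: fit a least-squares line y ≈ w*x + b with plain NumPy.
# Illustrative only; not a course assignment.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.5, size=100)

# Build a design matrix with a bias column and solve the least-squares problem.
X = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated slope {w:.2f}, intercept {b:.2f}")  # close to 3.0 and 2.0
```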

There is no textbook required, but here are some recommended readings:

Prerequisites

Math, Stats, and Coding: (CSE 12 or DSC 40B) and (CSE 15L or DSC 80) and (CSE 103 or ECE 109 or MATH 181A or ECON 120A or MATH 183)

TAs

  • Teaching Assistants: Zihan Wang (ziw224 AT ucsd.edu) and Yuwei Zhang (yuz163 AT ucsd.edu)

Office Hours

Note: all times are in Pacific Time.

Grading

  • Homework: 8% each, 24% total. Your lowest of four homework grades is dropped (or, equivalently, one homework can be skipped).
  • Midterm: 26%.
  • Data Mining Challenge: 25%.
  • Project: 25%.
  • You should complete all work individually, except for the Project.
  • Late submissions are NOT accepted.

Lecture Schedule

Recording Note: Please download the recording video to watch the full lecture; the Dropbox website only shows the first hour.

HW Note: All HWs are due by the end of the day on the due date, i.e., 11:59 PM PT.

| Week | Date | Topic & Slides | Events |
|------|------|----------------|--------|
| 1 | 01/07 (Tue) | Introduction: Data Types, Tasks, and Evaluations | HW1 out |
| 1 | 01/09 (Thu) | Supervised - Least-Squares Regression and Logistic Regression | |
| 2 | 01/14 (Tue) | Supervised - Overfitting and Regularization | HW2 out |
| 2 | 01/16 (Thu) | Supervised - Support Vector Machine | HW1 Due |
| 3 | 01/21 (Tue) | Supervised - Naive Bayes and Decision Tree | |
| 3 | 01/23 (Thu) | Supervised - Ensemble Learning: Bagging and Boosting | |
| 4 | 01/28 (Tue) | Cluster Analysis - K-Means Clustering & its Variants | HW2 Due, HW3 out |
| 4 | 01/30 (Thu) | Cluster Analysis - "Soft" Clustering: Gaussian Mixture | |
| 5 | 02/04 (Tue) | Cluster Analysis - Density-based Clustering: DBSCAN | |
| 5 | 02/06 (Thu) | Cluster Analysis - Principal Component Analysis | DM Challenge out |
| 6 | 02/11 (Tue) | Pattern Analysis - Frequent Pattern and Association Rules | |
| 6 | 02/13 (Thu) | Midterm (no class; 24-hour take-home window on this date) | |
| 7 | 02/18 (Tue) | Recommender System - Collaborative Filtering | HW3 Due, HW4 out |
| 7 | 02/20 (Thu) | Recommender System - Latent Factor Models | |
| 8 | 02/25 (Tue) | Text Mining - Zipf's Law, Bag-of-Words, and TF-IDF | |
| 8 | 02/27 (Thu) | Text Mining - Advanced Text Representations | DM Challenge due |
| 9 | 03/03 (Tue) | Network Mining - Small-Worlds & Random Graph Models, HITS, PageRank | |
| 9 | 03/05 (Thu) | Network Mining - Personalized PageRank and Node Embedding | |
| 10 | 03/10 (Tue) | Sequence Mining - Sliding Windows and Autoregression | |
| 10 | 03/12 (Thu) | Text Data as Sequence - Named Entity Recognition | HW4 Due |

Homework (24%)

Your lowest of four homework grades is dropped (or, equivalently, one homework can be skipped).

  • HW1: Concepts and Evaluations (8%). This homework mainly focuses on the data mining concepts and how to evaluate different tasks.
  • HW2: Regression and Classification (8%). This homework mainly focuses on regression and classification tasks.
  • HW3: Cluster and Pattern Analysis (8%). This homework mainly focuses on clustering methods and frequent pattern mining methods.
  • HW4: Applications (8%). This homework mainly focuses on recommender systems, text mining, and network mining.

Midterm (26%)

It is an open-book, take-home exam covering all lectures given before the midterm. Most of the questions will be open-ended, and some may be slightly more difficult than the homework. You will have 24 hours to complete the midterm, which is expected to take about 3 to 4 hours.

  • Start: Feb 13, 8 AM PT
  • End: Feb 14, 8 AM PT
  • Midterm problems download: TBD
  • Please make your submissions on Gradescope.

Data Mining Challenge (25%)

It is an individual-based data mining competition with quantitative evaluation. The challenge runs from Feb 6 to Feb 27. Note that the time displayed on Kaggle is in UTC, not PT.

  • Challenge Statement, Dataset, and Details: TBD
  • Kaggle challenge link: TBD
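Since the other deadlines in this course are stated in PT, here is a small sketch of how to convert a UTC timestamp shown on Kaggle to Pacific Time; the timestamp used is a placeholder for illustration, not the official challenge deadline.

```python
# Convert a deadline displayed in UTC (as on Kaggle) to Pacific Time.
# The timestamp below is a placeholder, not the official deadline.
from datetime import datetime
from zoneinfo import ZoneInfo

deadline_utc = datetime(2025, 2, 28, 7, 59, tzinfo=ZoneInfo("UTC"))
deadline_pt = deadline_utc.astimezone(ZoneInfo("America/Los_Angeles"))
print(deadline_pt)  # 2025-02-27 23:59:00-08:00, i.e., 11:59 PM PT on Feb 27
```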

Project (25%)

Instructions for both choices will be available here. The project is due on Sunday, Mar 15, by the end of the day.

Here is a quick overview:

  • Choice 1: Team-Based Open-Ended Project
    • 1 to 4 members per team. More members, higher expectations.
    • Define your own research problem and justify its importance.
    • Come up with your hypotheses and find datasets to verify them.
    • Design your own models or try a large variety of existing models.
    • Write a 4- to 8-page report (in the style of a research paper).
    • Submit your code.
    • Up to a 5% bonus toward the total course grade for working demos/apps.
  • Choice 2: Individual-Based Deep Dive into Data Mining Methods
    • Implement a few models learned in this course from scratch.
    • Skeleton code can be found here. Your work is mostly "filling in the blanks" following the TODOs outlined in the Jupyter notebook (a hypothetical sketch of this style appears after this list).
    • Each model is worth a certain number of points; 6 points are required in total. The points for each model are listed at the end of the instruction slides.
    • Write a report (length based on points) describing your interesting findings.
    • Up to a 5% bonus toward the total course grade; roughly, 1 point corresponds to 1%.
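For illustration only, below is a hypothetical sketch of what the "fill in the blanks" style looks like. The function names, structure, and the choice of K-Means are made up for this example and are not the actual course skeleton; the TODO lines are shown already filled in so the sketch runs as-is.

```python
# Hypothetical illustration of the Choice 2 "fill in the blanks" style.
# Names, structure, and the K-Means example are invented for this sketch.
import numpy as np

def kmeans_assign(X: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """Assign each row of X to the index of its nearest center."""
    # TODO: compute squared distances to every center and take the argmin.
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def kmeans_update(X: np.ndarray, labels: np.ndarray, k: int) -> np.ndarray:
    """Recompute each center as the mean of its assigned points."""
    # TODO: average the points in each cluster (empty clusters not handled here).
    return np.stack([X[labels == j].mean(axis=0) for j in range(k)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    centers = X[[0, 50]]                      # deterministic init, one per blob
    for _ in range(10):
        labels = kmeans_assign(X, centers)
        centers = kmeans_update(X, labels, k=2)
    print(centers.round(2))                   # roughly [[0, 0], [3, 3]]
```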

Sample project reports are here.