2023-Winter-DSC148-Introduction to Data Mining

Undergraduate Class, HDSI, UCSD, 2023

Class Time: Tuesdays and Thursdays, 9:30AM to 10:50AM. Room: WLH 2204 (1st week over Zoom). Piazza: piazza.com/ucsd/winter2023/dsc148

Online Lecturing

To give waitlisted students a chance to learn more about this course, lectures in the first week will be delivered over Zoom: https://ucsd.zoom.us/j/97017584161. The lectures will be recorded.


This course mainly focuses on introducing current methods and models that are useful in analyzing and mining real-world data. It will cover frequent pattern mining, regression & classification, clustering, and representation learning. No previous background in machine learning is required, but all participants should be comfortable with programming and with basic optimization and linear algebra.

No textbook is required, but here are some recommended readings:


Prerequisites

Math, Stats, and Coding: (CSE 12 or DSC 40B) and (CSE 15L or DSC 80) and (CSE 103 or ECE 109 or MATH 181A or ECON 120A or MATH 183)

TAs and Tutors

  • Teaching Assistants: Dheeraj Mekala (dmekala AT eng.ucsd.edu) and Zilong Wang (ziw049 AT ucsd.edu)

Office Hours

Note: all times are in Pacific Time.


Grading

  • Homework: 8% each. Your lowest (of four) homework grade is dropped (i.e., one homework can be skipped).
  • Midterm: 26%.
  • Data Mining Challenge: 25%.
  • Project: 25%.
  • All work must be completed individually, except for the Project.
  • Late submissions are NOT accepted.
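For concreteness, the weights above combine as follows. This is a minimal sketch of the arithmetic only; the function name, the 0–100 score scale, and any rounding behavior are assumptions for illustration, not part of the official course policy:

```python
def course_grade(homeworks, midterm, challenge, project):
    """Combine component scores (each 0-100) using the stated weights.

    homeworks: the four homework scores; the lowest is dropped, and the
    remaining three count 8% each (24% of the total).
    """
    kept = sorted(homeworks)[1:]            # drop the lowest of the four
    hw_part = sum(s * 8 for s in kept) / 100  # 8% per counted homework
    rest = (midterm * 26 + challenge * 25 + project * 25) / 100
    return hw_part + rest

# Perfect scores everywhere: 3*8 + 26 + 25 + 25 = 100.
print(course_grade([100, 100, 100, 100], 100, 100, 100))  # 100.0
```

Note that skipping one homework entirely (a zero) costs nothing, since that zero is the grade that gets dropped.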

Lecture Schedule

Recording Note: Please download the recording to watch it at full length; the Dropbox website only shows the first hour.

HW Note: All homeworks are due at 8:00 AM PT on the morning of the due date, before lecture.

| Week | Date | Topic & Slides | Events |
|------|------|----------------|--------|
| 1 | 01/10 (Tue) | Introduction: Data Types, Tasks, and Evaluations | HW1 out |
| 1 | 01/12 (Thu) | Supervised - Least-Squares Regression and Logistic Regression | |
| 2 | 01/17 (Tue) | Supervised - Overfitting and Regularization | HW2 out |
| 2 | 01/19 (Thu) | Supervised - Support Vector Machine | HW1 Due |
| 3 | 01/24 (Tue) | Supervised - Naive Bayes and Decision Tree | |
| 3 | 01/26 (Thu) | Supervised - Ensemble Learning: Bagging and Boosting | |
| 4 | 01/31 (Tue) | Cluster Analysis - K-Means Clustering & its Variants | HW2 Due, HW3 out |
| 4 | 02/02 (Thu) | Cluster Analysis - “Soft” Clustering: Gaussian Mixture | |
| 5 | 02/07 (Tue) | Cluster Analysis - Density-based Clustering: DBSCAN | |
| 5 | 02/09 (Thu) | Cluster Analysis - Principal Component Analysis | DM Challenge out |
| 6 | 02/14 (Tue) | Pattern Analysis - Frequent Pattern and Association Rules | |
| 6 | 02/16 (Thu) | Midterm (24 hours on this date) | |
| 7 | 02/21 (Tue) | Recommender System - Collaborative Filtering | HW3 Due, HW4 out |
| 7 | 02/23 (Thu) | Recommender System - Latent Factor Models | |
| 8 | 02/28 (Tue) | Text Mining - Zipf’s Law, Bag-of-Words, and TF-IDF | |
| 8 | 03/02 (Thu) | Text Mining - Advanced Text Representations | DM Challenge due |
| 9 | 03/07 (Tue) | Network Mining - Small-Worlds & Random Graph Models, HITS, PageRank | |
| 9 | 03/09 (Thu) | Network Mining - Personalized PageRank and Node Embedding | |
| 10 | 03/14 (Tue) | Sequence Mining - Sliding Windows and Autoregression | |
| 10 | 03/16 (Thu) | Text Data as Sequence - Named Entity Recognition | HW4 Due |

Homework (24%)

Your lowest (of four) homework grade is dropped (i.e., one homework can be skipped).

Midterm (26%)

It is an open-book, take-home exam covering all lectures given before the midterm. Most questions will be open-ended, and some may be slightly more difficult than the homework. You will have 24 hours to complete the exam, which is expected to take about 2 hours.

  • Start: Feb 16, 8 AM PT
  • End: Feb 17, 8 AM PT
  • Midterm problems download: here
  • Please make your submissions on Gradescope.

Data Mining Challenge (25%)

It is an individual data mining competition with quantitative evaluation. The challenge runs from Feb 9 to Mar 2. Note that the times displayed on Kaggle are in UTC, not PT.

  • Challenge Statement, Dataset, and Details: here
  • Kaggle challenge link: here

Project (25%)

Instructions for both choices will be available here. The project is due on Sunday, Mar 19, EOD.

Here is a quick overview:

  • Choice 1: Team-Based Open-Ended Project
    • 1 to 4 members per team; larger teams face higher expectations.
    • Define your own research problem and justify its importance.
    • Come up with your hypothesis and find datasets for verification.
    • Design your own models or try a large variety of existing models.
    • Write a 4- to 8-page report (in a research-paper style).
    • Submit your code.
    • Up to a 5% bonus toward the total course grade for working demos/apps.
  • Choice 2: Individual-Based Deep Dive into Data Mining Methods
    • Implement a few models learned in this course from scratch.
    • Skeleton code can be found here. Your work is mostly “filling in the blanks” following the TODOs outlined in the Jupyter notebooks.
    • Each model is worth a certain number of points, and 6 points are required. The point value of each model is listed at the end of the instruction slides.
    • Write a report (length based on points) describing your interesting findings.
    • Up to a 5% bonus toward the total course grade: roughly 1 extra point earns 1%.

Sample project reports are here.