2025-Winter-MGTA415-Analyzing Unstructured Data

Graduate Class, Rady Management School, UCSD, 2025

Class Time: Mondays, 2 to 4:50PM. Room: OTRSN 1E107 (1st week over Zoom). Piazza: piazza.com/ucsd/winter2025/mgta415

Online Lecturing

To give waitlisted students an opportunity to learn more about this course, the first week's lecture will be delivered over Zoom: https://ucsd.zoom.us/j/95861287987. The lecture will be recorded.

Overview

This course mainly focuses on introducing current methods and models that are useful in analyzing and mining real-world unstructured text data.

As a starting point, we will review basic machine learning models such as linear regression and logistic regression. Then, we will cover traditional text preprocessing techniques, including tokenization, POS tagging, parsing, etc., using popular Python libraries as examples. After that, we will move on to text analysis problems such as text classification (e.g., sentiment analysis), information retrieval, topic modeling, word embedding, and language models.
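
For a concrete sense of what such preprocessing looks like, here is a minimal sketch using NLTK; the specific library, example sentence, and resource names are illustrative assumptions, and the tools used in lecture may differ:

```python
# A minimal sketch of tokenization and POS tagging with NLTK.
# Resource names can vary slightly across NLTK versions; these are common defaults.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "The movie was surprisingly good, but the ending felt rushed."

# Tokenization: split the raw string into word-level tokens.
tokens = nltk.word_tokenize(text)

# POS tagging: assign a part-of-speech tag to each token.
tagged = nltk.pos_tag(tokens)

print(tokens)
print(tagged)  # e.g., [('The', 'DT'), ('movie', 'NN'), ...]
```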

This course will mainly focus on a high-level understanding of these concepts and will also give students hands-on guidance for implementing their own text analysis models (e.g., how to use third-party libraries and how to set hyper-parameters).
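
As a rough illustration of that workflow, here is a minimal text classification sketch with scikit-learn; the toy data, library choices, and hyper-parameter values are assumptions for illustration, not assignment specifics:

```python
# A minimal sketch: TF-IDF bag-of-words features + logistic regression,
# with a couple of explicit hyper-parameter choices.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy sentiment data; real assignments would use a larger dataset.
texts = ["great movie, loved it", "terrible plot, waste of time",
         "wonderful acting", "boring and too long"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = Pipeline([
    # ngram_range and min_df are typical hyper-parameters to tune.
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    # C controls the strength of regularization.
    ("clf", LogisticRegression(C=1.0, max_iter=1000)),
])

model.fit(texts, labels)
print(model.predict(["loved the acting", "what a waste"]))
```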

After the midterm exam, we will talk about more advanced text mining problems, such as phrase mining, named entity recognition, and taxonomy construction. We will go beyond traditional supervised methods and place some emphasis on unsupervised, weakly supervised, and distantly supervised methods. Bootstrapping, comparative analysis, and learning from seed words and existing knowledge bases will be the key methodologies to know. We will also explain ChatGPT without any formulas and offer high-level insights into how to better utilize it.
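
To give a taste of the seed-word idea, here is a minimal weakly supervised classification sketch; the class names, seed lists, and documents are illustrative assumptions rather than any course dataset:

```python
# A minimal sketch of seed-word-based (weakly supervised) classification:
# assign each document to the class whose seed words it mentions most often.
seed_words = {
    "sports": {"game", "team", "score", "coach"},
    "politics": {"election", "senate", "policy", "vote"},
}

docs = [
    "the team celebrated after the final score",
    "the senate will vote on the new policy",
]

for doc in docs:
    tokens = set(doc.lower().split())
    # Count seed-word overlap for each class; in practice, ties stay unlabeled.
    scores = {label: len(tokens & seeds) for label, seeds in seed_words.items()}
    print(doc, "->", max(scores, key=scores.get))
```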

We will have a take-home midterm, a few homework assignments, a Kaggle-style competition, and a final (team-based) project. These four components carry roughly equal weight.

There is no textbook required, but there are recommended readings for each lecture (at the end of the slides).

If you don’t have much experience in data mining, machine learning, etc., here are some recommended textbooks to review.

Prerequisites

  • Math, Stats, and Coding
  • For Coding
    • We will mainly use Python
    • Sometimes, we will need to run some tools developed in C/C++ and Java
  • It’s a bonus if you already have knowledge about machine learning and data mining

Teaching Assistant

  • Siqi Wang
  • Feng Yao
  • Jinya Jiang

Office Hours

  • Jingbo Shang
  • Siqi Wang
    • Office Hour: TBD
  • Feng Yao
    • Office Hour: TBD
  • Jinya Jiang
    • Office Hour: TBD

Note: all times are in Pacific Time.

Grading

  • Homework: 24% total (three assignments, 8% each).
  • Midterm: 26%.
  • Data Mining Challenge: 25%.
  • Project: 25%.
  • You should complete all work individually, except for the Project.
  • Late submissions are NOT accepted.

Lecture Schedule

Recording Note: Please check out Canvas for recordings.

HW Note: All HWs due by the end of the day on the due date, i.e., 11:59 PM PT.

| Week | Date | Topic & Slides | Events |
|------|------|----------------|--------|
| 1 | 01/06 (Mon) | Intro and Text Preprocessing | HW1 out |
| 2 | 01/13 (Mon) | Machine Learning Concepts and Basics | |
| 3 | 01/20 (Mon) | MLK Day (no class) | |
| 4 | 01/27 (Mon) | Text Classification using Bag-of-Words | DM Challenge out |
| 5 | 02/03 (Mon) | Word Embedding & Language Models: from N-Gram to Neural LMs | HW1 due, HW2 out |
| 6 | 02/10 (Mon) | Information Retrieval & Topic Modeling | |
| 7 | 02/17 (Mon) | Presidents’ Day (no class) | HW2 due, Midterm Exam Week |
| 8 | 02/24 (Mon) | Phrase Mining and Its Applications | HW3 out, DM Challenge due |
| 9 | 03/03 (Mon) | Image Classification | |
| 10 | 03/10 (Mon) | Weakly Supervised Text Classification | HW3 due |

Homework (24%)

  • HW1: Text Pre-processing and Classification (8%). This homework mainly focuses on the impact of pre-processing on classification results.
  • HW2: Word Embedding and Language Models (8%). This homework mainly focuses on trying out word embeddings and n-gram language models (see the sketch after this list).
  • HW3: Phrase Mining and Image Classification (8%). This homework mainly focuses on applying phrase mining to a given set of documents and also trying out an image classification model.
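
To give a flavor of the n-gram part, here is a minimal bigram language model sketch in plain Python; the tiny corpus and add-one smoothing choice are assumptions for illustration, and the actual data and evaluation will be specified in the assignment itself:

```python
# A minimal sketch of a bigram language model with add-one (Laplace) smoothing.
from collections import Counter

corpus = [
    ["the", "movie", "was", "good"],
    ["the", "movie", "was", "boring"],
    ["the", "acting", "was", "good"],
]

unigrams = Counter()
bigrams = Counter()
for sent in corpus:
    tokens = ["<s>"] + sent + ["</s>"]
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

vocab_size = len(unigrams)

def bigram_prob(prev, word):
    # P(word | prev) with add-one smoothing.
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

def sentence_prob(sent):
    # Probability of a whole sentence under the bigram model.
    tokens = ["<s>"] + sent + ["</s>"]
    p = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        p *= bigram_prob(prev, word)
    return p

print(sentence_prob(["the", "movie", "was", "good"]))
```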

Midterm (26%)

It is an open-book, take-home exam covering all lectures given before the midterm. Most of the questions will be open-ended, and some may be slightly more difficult than the homework. The exam can be started at any time within a 24-hour window; once started, you will have 3 hours to complete it.

  • Start: Feb 18, 12 PM PT
  • End: Feb 19, 12 PM PT
  • Midterm problems: will be posted in Canvas.

Data Mining Challenge (25%)

It is an individual-based data mining competition with quantitative evaluation. The challenge runs from Feb 1 to Feb 28. Note that the times displayed on Kaggle are in UTC, not PT.

  • Challenge Statement, Dataset, and Details: TBD.
  • Kaggle challenge link: TBD.

Project (25%)

  • Team-Based Open-Ended Project
    • 1 to 4 members per team; larger teams face higher expectations.
    • Define your own research problem and justify its importance
    • Final Deliverables: Research Paper-like Report
      • Report due on Mar 16, end of day, Pacific Time.
      • Write a 5- to 9-page report (research-paper style, following the ACL template); the page count does not include references.
      • Come up with your hypothesis and find some datasets for verification
      • Design your own models or try a large variety of existing models
      • Submit your code and datasets; GitHub repos are welcome
      • Up to 5% bonus toward the total course grade for working demos/apps