IIT Delhi at AAAI 2016

The thirtieth edition of AAAI's flagship conference, AAAI-2016, will be held in February, and IIT Delhi will be well represented there. Three of the accepted papers have IIT Delhi authors:

Scalable Training of Markov Logic Networks using Approximate Counting
Somdeb Sarkhel, Deepak Venugopal, Tuan Anh Pham, Parag Singla and Vibhav Gogate.

Summary: In this paper, we propose principled weight learning algorithms for Markov logic networks that scale to much larger datasets and application domains than existing algorithms. The main idea in our approach is to use approximate counting techniques to substantially reduce the complexity of the most computation-intensive sub-step in weight learning: computing the number of groundings of a first-order formula that evaluate to true given a truth assignment to all the random variables. We derive theoretical bounds on the performance of our new algorithms and demonstrate experimentally that they are orders of magnitude faster than existing approaches while achieving the same or better accuracy. This paper comes from a collaboration between IITD and UT Dallas.
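To give a flavor of the counting bottleneck, here is a toy Python sketch that estimates the number of true groundings of a formula by uniform sampling instead of full enumeration. The formula (the classic Smokes/Friends rule), the toy world, and all names below are assumptions made purely for illustration; this is a minimal sketch of the general idea, not the paper's algorithm.

```python
import itertools
import random

def formula_true(x, y, smokes, friends):
    """Grounding of Smokes(x) ^ Friends(x, y) => Smokes(y) in a given world."""
    return (not (smokes[x] and friends[(x, y)])) or smokes[y]

def exact_count(domain, smokes, friends):
    # Enumerates all |domain|^2 groundings: the expensive sub-step.
    return sum(formula_true(x, y, smokes, friends)
               for x, y in itertools.product(domain, domain))

def approx_count(domain, smokes, friends, num_samples=10000):
    # Sample groundings uniformly and scale the hit rate to the total count.
    hits = sum(formula_true(random.choice(domain), random.choice(domain),
                            smokes, friends)
               for _ in range(num_samples))
    return hits / num_samples * len(domain) ** 2

# Toy random world over 50 constants (illustrative only).
domain = list(range(50))
smokes = {x: random.random() < 0.3 for x in domain}
friends = {(x, y): random.random() < 0.1 for x in domain for y in domain}

print(exact_count(domain, smokes, friends))   # exact, quadratic in |domain|
print(approx_count(domain, smokes, friends))  # estimate, fixed sample budget
```

For this two-variable formula the exact count is still cheap, but the number of groundings grows exponentially with the number of variables in the formula, which is where sampling-based estimates pay off.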

Numerical Relation Extraction with Minimal Supervision
Aman Madaan, Ashish Mittal, Mausam, Ganesh Ramakrishnan, Sunita Sarawagi.

Summary: Standard relation extraction focuses on relations where both arguments are entities, e.g., (Obama, president of, United States). This paper studies numerical relations, i.e., those where one argument is a quantity, e.g., (India, inflation rate, 4.41%). This is an important subclass that has not received much attention, and it raises novel challenges beyond standard extraction. The paper identifies these challenges and presents two extractors for the problem: one rule-based and one based on a probabilistic graphical model. This paper comes from a collaboration between IITD and IITB.
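To give a flavor of the rule-based side, here is a toy Python extractor for one numerical relation using a single regular expression. The pattern, relation name, and example sentence are assumptions for illustration only; the paper's systems are, of course, far more general.

```python
import re

# Toy rule: match sentences like "<Entity>'s inflation rate stood at <number>%".
PATTERN = re.compile(
    r"(?P<entity>[A-Z][a-zA-Z]+)'s\s+inflation rate\s+"
    r"(?:is|was|stood at)\s+(?P<value>\d+(?:\.\d+)?)\s*%"
)

def extract(sentence):
    """Return an (entity, relation, quantity) triple, or None if no match."""
    m = PATTERN.search(sentence)
    if m:
        return (m.group("entity"), "inflation rate", m.group("value") + "%")
    return None

print(extract("India's inflation rate stood at 4.41% in September."))
# -> ('India', 'inflation rate', '4.41%')
```

Even this toy hints at why numerical relations are hard: quantities come in many surface forms and units, so simple string matching breaks down quickly.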

Reactive Learning: Active Learning with Relabeling
Christopher H. Lin, Mausam, Daniel S. Weld.

Summary: Traditional active learning labels each data point only once. In a crowdsourced setting, however, the same training data point may be labeled multiple times to increase confidence in the annotated label. In a budget-limited active learning scenario, there is thus a tradeoff between labeling new data points, which grows the training set, and relabeling existing data points, which improves the accuracy of the training set. Existing algorithms such as uncertainty sampling can get stuck in infinite loops on this problem. We present a new family of algorithms called impact sampling and obtain excellent results on simulated and real-world datasets. This paper comes from a collaboration between IITD and UW.
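To illustrate the label-vs-relabel tradeoff, here is a conceptual Python sketch of a budget-limited loop that, at each step, either labels a new point or relabels the point whose crowd votes are most conflicted. The simulated workers, the relabel_prob knob, and the selection rule are all assumptions for illustration; this is not the paper's impact-sampling algorithm.

```python
import random

def noisy_label(true_label, worker_accuracy=0.7):
    """Simulated crowd worker: correct with probability worker_accuracy."""
    return true_label if random.random() < worker_accuracy else 1 - true_label

def spend_budget(true_labels, budget, relabel_prob=0.5):
    votes = {}  # point -> list of noisy labels collected so far
    for _ in range(budget):
        unseen = [p for p in true_labels if p not in votes]
        if votes and (not unseen or random.random() < relabel_prob):
            # Relabel the point closest to a 50/50 vote split. Note that a
            # genuinely ambiguous point can soak up the budget forever, the
            # relabeling analogue of uncertainty sampling's infinite loop.
            point = min(votes,
                        key=lambda p: abs(2 * sum(votes[p]) - len(votes[p])))
        else:
            point = random.choice(unseen)  # spend the unit on a new point
        votes.setdefault(point, []).append(noisy_label(true_labels[point]))
    # Aggregate each point's votes by majority.
    return {p: int(sum(v) > len(v) / 2) for p, v in votes.items()}

# Toy binary task: 20 points, 40 labeling units of budget (illustrative only).
true_labels = {i: random.randint(0, 1) for i in range(20)}
print(spend_budget(true_labels, budget=40))
```

The fixed relabel_prob here is the crudest possible policy; the interesting question the paper tackles is how to decide, per budget unit, which action will have the greater impact on the learned classifier.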

Congratulations to all the authors! We’ll be asking them to share their slides and links to the final versions of the papers when these become available.
