Active Learning Workshop

    • Active Learning for Multi-Label Classification
    • Multi-Label Active Learning from Crowds, arXiv, 2015
    • Effective Multi-Label Active Learning for Text Classification, KDD, 2009
    • Active Learning with Multi-label SVM Classification, IJCAI, 2013
    • Active Query Driven by Uncertainty and Diversity for Incremental Multi-label Learning, ICDM, 2013
    • Active Learning by Querying Informative and Representative Examples, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014
    • Multi-Label Active Learning: Query Type Matters, IJCAI, 2015

Machine Reasoning Workshop

This workshop is prepared for Emotibot.

Date & Location

  • Date : 2017/11/15 (Wed) 11:30 ~ 14:30
  • Location : NTU CSIE R340

Agenda

  • Deep Reasoning (NTM, DNC, RN)
  • Neural Program

Attention Workshop

This workshop is prepared for Emotibot.

Date & Location

  • Date : 2017/12/20 (Wed) 11:30 ~ 14:30
  • Location : NTU CSIE R340

Agenda

  • Whole PDF : Attention Workshop

Graph Embedding Workshop

This workshop is prepared for Emotibot.

Date & Location

  • Date : 2018/1/26 (Fri) 12:00 ~ 15:00
  • Location : NTU CSIE R340

Agenda

Adversarial Examples Workshop

Date & Location

  • Date : 2018/3/7 (Wed) 12:00 ~ 15:00
  • Location : NTU CSIE R544

Agenda

  • Session 1 (45 mins) : Introduction (Fred)
  • Session 2 (45 mins) : Defense (Alicia)
  • Session 3 (45 mins) : Implementation (Applications, Tools) (漪莛)

Deep Natural Language Generation Workshop

Date & Location

  • Date : 4/12 Thursday 11:00-14:00 (preferred)
  • Location : R340

Agenda

Privacy-preserving Machine Learning Workshop

Date & Location

  • Date : 5/10 Thursday 11:00-14:00
  • Location : R324

Agenda

Content-based Recommendation Workshop

Date & Location

  • Date : 2018/10/12 Friday 14:00-16:30
  • Location:

Agenda

Talks at Emotibot

  • 邱德旺, Attention Model on Stance Classification, 8/2 14:00~15:00
    Abstract: Stance classification is the task of understanding two given inputs and determining the relation between their stances. One of the inputs is a target claim, which is a statement about a certain target. The other is a headline or an article that agrees with, opposes, or discusses the target claim. In this paper, we propose a model combining a polarity classifier with an attention mechanism, the Attention Model (AM), which uses a similarity function to extract the important information from both short and long content. The experiments show that the proposed methods perform better than the baseline and competitors.
    Slides

  • 林宗興, Finding Adversarial Examples for Text Classification: A Reinforcement Learning Approach, 8/2 15:00~16:00
    Abstract: While deep neural networks are becoming more and more popular and are widely used in many domains, including natural language processing, several works have discussed the vulnerability of deep models. Adversarial examples, a kind of synthetic data, expose the weaknesses of a machine learning model. In this work, we aim to improve the efficiency of finding adversarial examples in text classification tasks. Moreover, we use the adversarial examples for adversarial training and find that it may improve generalization on unseen data when the training data is not sufficient.
    Slides

  • Chih-Te Lai, Non-parallel Text Style Transfer by Latent Space Alignment, 8/9 14:00~15:00
    Abstract: Style transfer is a popular topic in artificial intelligence research. However, several issues of non-parallel style transfer in natural language processing remain challenging. One is that separating the styles and content of texts is difficult, and current models have no proper mechanism to preserve the style-unrelated content of texts. Another is that most approaches focus on transferring styles between two aspects, so pairwise models must be created when dealing with transfer among multiple aspects. In this work, we propose an auto-encoder model that includes a unified generative adversarial network to transfer styles, together with novel latent regularization losses to preserve content representations in latent space. Empirical results show that our models achieve more diverse and general style transfer.
    Slides

  • Zi-Pong Lim, Deep Reinforcement Learning for Team Draft Recommendations in MOBA Games, 8/9 15:00~16:00
    Abstract: Multiplayer Online Battle Arena (MOBA) is a genre of games in which two teams of players compete against each other for a certain objective. Both teams take turns picking one draft at a time from a pool of characters before a match begins. In this paper, we propose a team draft recommendation system based on deep reinforcement learning that recommends drafts for a team, given the current enemy and ally drafts. All the experiments and results in this paper are based on a popular MOBA called DOTA2.
    Slides

  • Chao-Chung Wu, An Attention Based Neural Network Model for Unsupervised Lyrics Rewriting, 8/9 16:00~17:00
    Abstract: Creative writing has become a standard task to showcase the power of artificial intelligence. This work tackles a challenging task in this area: lyrics rewriting. We rewrite original lyrics into new lyrics that match the originals in terms of segmentation, while the user may designate a different style of rewriting in part of speech (PoS), rhyme, and emotion. We propose a multi-encoder RNN-based model for this task and conduct both automatic evaluation and a human study to assess the model's effectiveness. Last but not least, we observe how the attention changes during training and explain how the model learns rhymes and PoS through our model structure.
    Slides
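The similarity-driven attention described in the stance-classification abstract (scoring document tokens by their similarity to the claim, then pooling) can be sketched in a few lines of NumPy. This is only an illustrative sketch under assumed shapes and a plain dot-product similarity; the function names and dimensions are ours, not the talk's actual model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())          # subtract max for numerical stability
    return e / e.sum()

def attention_pool(claim_vecs, doc_vecs):
    """Weight each document token by its similarity to a claim summary
    vector, then return the attention-weighted sum of document tokens."""
    claim = claim_vecs.mean(axis=0)  # crude claim summary (assumption)
    scores = doc_vecs @ claim        # dot-product similarity per doc token
    weights = softmax(scores)        # normalize scores to attention weights
    return weights @ doc_vecs        # pooled document representation

rng = np.random.default_rng(0)
claim_vecs = rng.normal(size=(4, 8))   # 4 claim tokens, 8-dim embeddings
doc_vecs = rng.normal(size=(50, 8))    # 50 document tokens (a "long" input)
pooled = attention_pool(claim_vecs, doc_vecs)
print(pooled.shape)  # (8,)
```

The pooled vector would then feed a downstream classifier (e.g. agree / oppose / discuss); because the weights are claim-conditioned, the same mechanism works for both short headlines and long articles.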
attention_workshop.txt · Last modified: 2018/10/10 23:29 by cwtsai