
2015 Conference - Deep Learning

16/10/24 01:05 (yy/mm/dd hh:mm)

Conference season has already come around again this year, 2016. It's an internal conference for company employees, held every year from late October to early November. Last year I was very interested in machine learning, and as it happened, renowned researchers such as Prof. Yoshua Bengio came to speak, so I attended and got a lot out of it.

Anyway, I want to write down here the brief notes I took at the time, mostly as keywords.


- deep learning (DL)
Deep Learning Beyond Classification
Trends and Challenges of Deep Learning
Prof. Yoshua Bengio (University of Montreal)

pattern recognition
speech recognition
memory networks - for natural language
caption generation

batch normalization
mini batch
importance sampling

{un,semi}supervised learning
neuro-biology

training procedure w/ HW implementation

visual attention to internal attention (sequencing)
ex) input (French) -> output (English)
                time sequencing (AND gate)

attention mechanism
1. soft attention : trained by back-prop (fast)
2. stochastic hard attention : noisy gradient (slower), REINFORCE with a baseline
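
To make the two variants concrete, a minimal NumPy sketch (my own illustration, not code from the talk):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention(scores, values):
    # differentiable weighted average over all locations, so the whole
    # model trains with ordinary back-propagation (fast)
    return softmax(scores) @ values

def hard_attention(scores, values, rng=np.random.default_rng()):
    # samples a single location; the sampling step blocks gradients,
    # so training needs a noisy REINFORCE estimator with a baseline (slower)
    i = rng.choice(len(scores), p=softmax(scores))
    return values[i]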

end-to-end machine translation

3-inputs
1. higher-level RNN state
2. lower-level RNN state
3. previous attention pattern
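
A sketch of how those three inputs could enter an additive (Bahdanau-style) score; the weight names Ws, Wh, wa, v are mine, assumed for illustration:

import numpy as np

def attention_scores(s_high, h_low, prev_attn, Ws, Wh, wa, v):
    # s_high:    higher-level (decoder) RNN state, shape (d,)
    # h_low:     lower-level (encoder) RNN states, shape (n, d)
    # prev_attn: previous attention pattern, shape (n,)
    # returns one unnormalized score per source position; a softmax
    # over the result gives the new attention pattern
    e = np.tanh(h_low @ Wh.T + s_high @ Ws.T + np.outer(prev_attn, wa))
    return e @ v  # shape (n,)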

image-to-text
caption generation

memory access
memorynet

batch normalization
must-have
small (mini-batch) size
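
The transform itself is short; a NumPy sketch of train-time batch norm (mine):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: activations for one mini-batch, shape (batch, features)
    mu = x.mean(axis=0)                     # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta             # learned scale and shift

# at test time, running averages of mu and var replace the batch statistics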

mini-batch importance sampling tricks
reduce (GPU) computational cost
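
I didn't note which trick exactly; one common variant draws examples in proportion to their last-seen loss and reweights them so the gradient stays unbiased. A sketch under that assumption:

import numpy as np

def sample_minibatch(losses, batch_size, rng=np.random.default_rng()):
    # draw "hard" examples more often, proportional to their last loss
    p = losses / losses.sum()
    idx = rng.choice(len(losses), size=batch_size, p=p)
    # importance weights keep the gradient estimate unbiased:
    # E_p[w_i * g_i] = sum_i p_i * g_i / (N * p_i) = mean_i g_i
    weights = 1.0 / (len(losses) * p[idx])
    return idx, weights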

unsupervised learning
VAEs - PRW
GANs - LAPGAN
ladder networks
100 (labels?) -> top-down signal

challenges:
natural language understanding
reasoning & question answering (QA)
long-term dependencies


- multimodal
Multi-modal Deep Learning
Prof. Ruslan Salakhutdinov (University of Toronto)

unlabeled data
drug discovery: Merck, Novartis

zero-shot learning
caption generation

input -> CNN-LSTM encoder -> multimodal -> SC-NLM decoder

image generation from caption

Skip-thought
SemEval
MS Research Paraphrase Corpus

problems
one-shot problems


- DL & reinforcement learning
Deep Learning with Reinforcement Learning
Prof. Joelle Pineau (McGill University)

Dynamic System
Learning Agent

state, reward : DS -> LA
action : LA -> DS

supervised: input -> (desired) -> output
reinforce:  input -> ( ) -> output -> environment -(reward)-> input, ( )

Markov decision process
          a        a
s_{t-1} ---> s_t ---> s_{t+1}

state-action value function (Bellman equation)
Q(s,a) = R(s,a)                                 (immediate reward)
       + γ Σ_{s'} P(s'|s,a) max_{a'} Q(s',a')   (future expected sum of rewards)
Q function -> act -> new transition -> (feedback to Q function)
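
That feedback loop is the Q-learning update itself; a tabular sketch (mine):

import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # one observed transition (s, a, r, s') nudges Q(s,a) toward the
    # immediate reward plus the discounted best future value
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

# Q is an (n_states, n_actions) array; acting epsilon-greedily on Q[s]
# produces the new transitions that feed back in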

regression
random forest -> deep learning

Atari 2600 (DeepMind)
Q function
state -> action -> reward
stochastic gradient training

Deep Q-Network (DQN)

Deep Q learning
- experience replay
- error clipping
- periodic updates to target value

double DQN
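
Those three stabilizers, plus the double-DQN change, in a compressed PyTorch-style sketch; q_net, target_net, and the replay buffer are hypothetical stand-ins, not DeepMind's code:

import torch
import torch.nn.functional as F

def dqn_step(q_net, target_net, replay, optimizer, gamma=0.99):
    # experience replay: train on random past transitions to break
    # temporal correlations in the data
    s, a, r, s_next, done = replay.sample(32)  # hypothetical buffer API

    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) taken

    with torch.no_grad():
        # double DQN: the online net selects the action, the periodically
        # copied target net evaluates it (plain DQN maxes the target net)
        a_star = q_net(s_next).argmax(dim=1, keepdim=True)
        q_next = target_net(s_next).gather(1, a_star).squeeze(1)
        target = r + gamma * (1 - done) * q_next

    # error clipping: the Huber loss acts like clipping the TD error to [-1, 1]
    loss = F.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# periodic updates to the target value: every C steps,
# target_net.load_state_dict(q_net.state_dict())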

learning ∩ planning = RL (reinforcement learning)

dual encoder model

RNN -> ->
            AND gate ->
RNN -> ->
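
My reading of the diagram: two RNNs encode the two sides (e.g. context and response) and the "AND gate" combines them into one match score. One common form is a bilinear score, sketched below with assumed names:

import numpy as np

def dual_encoder_score(c, r, M):
    # c: final RNN state for side 1 (e.g. context), shape (d,)
    # r: final RNN state for side 2 (e.g. response), shape (d,)
    # M: learned (d, d) interaction matrix; sigmoid gives a match probability
    return 1.0 / (1.0 + np.exp(-(c @ M @ r)))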

DL (deep learning)
dropout - full dropout policy, block dropout policy
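
I'm not sure exactly what "full" vs "block" policy meant here; my reading is standard per-unit dropout versus dropping contiguous blocks of units. A sketch under that assumption:

import numpy as np

def dropout(x, p=0.5, rng=np.random.default_rng()):
    # "full" policy (standard inverted dropout): zero each unit
    # independently with probability p, rescale to keep expectations
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def block_dropout(x, block, p=0.5, rng=np.random.default_rng()):
    # "block" policy (my reading): drop contiguous groups of `block`
    # units together instead of independent units
    assert x.shape[-1] % block == 0
    keep = (rng.random(x.shape[-1] // block) >= p).repeat(block)
    return x * keep / (1.0 - p)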


- Scene understanding w/ DL
Scene Understanding with Deep Learning
Prof. Antonio Torralba (MIT)

computer vision

ImageNet (2009)
based on WordNet
ontology
http://www.image-net.org

image detection
scene recognition

http://places.csail.mit.edu/demo.html

gaze following
saliency modeling

frames / scenario recognition


- Panel discussion
Panel Discussion: Deep Learning - Future Directions
Moderator: Prof. Ruslan Salakhutdinov (University of Toronto)

where to invest? maybe drones, vision, health

pre-training?
-> after CNNs (AlexNet), is it no longer needed?
-> still useful
