
Link > [[ノート/トリーシミラリティ周辺で検索してみた]]

<<[[Handy Deep Learning glossary memo by Ichisugi (AIST):https://staff.aist.go.jp/y-ichisugi/rapid-memo/deep-learning.html#DBN]]>>

**Searching around deep learning, 2014-03-06  [#sd35befc]
<In no particular order, for now>

[[Serial tutorial "Deep Learning" in the JSAI journal, scheduled from the May 2013 issue through March 2014:http://www.kamishima.net/2013/05/edit-tutorial-deeplearning/]]~
No online copies of the individual articles are available; hard copies should be obtainable through a library (from another library via interlibrary loan).

-On the serial tutorial "Deep Learning" (part 1), Toshihiro Kamishima, Yutaka Matsuo
-An introduction to deep Boltzmann machines: fundamentals of Boltzmann machine learning (part 1), Muneki Yasuda
-Implementation technologies for large-scale deep learning (part 3), Daisuke Okanohara, Vol. 28 No. 5 (September 2013)
-Deep learning for image recognition (part 4), Takayuki Okatani


Added 2014-09-02: JSAI journal Vol. 29 No. 4 (2014/07) [[My Bookmark: Deep Learning (Kotaro Nakayama):http://www.ai-gakkai.or.jp/my-bookmark_vol29-no4/]]

[[Deep Learning reading group 2013:https://sites.google.com/site/deeplearning2013/]]

[[The deeplearning.net site:http://deeplearning.net]]

[[A light introduction to Deep Learning, v1.0:http://www.slideshare.net/yoshihisamaruya/dnn-deep-learningv10]]

[[Introduction to deep learning (Danushka Bollegala):http://www.slideshare.net/bollegala/ss-39065162]]

[[English Wikipedia: Deep learning:http://en.wikipedia.org/wiki/Deep_learning]]

[[Deep learning in a sense, too: the Recurrent Neural Network Language Model [MLAC2013 day 9]:http://kiyukuta.github.io/2013/12/09/mlac2013_day9_recurrent_neural_network_language_model.html]]

[[Paper introduction: "Deep learning via Hessian-free optimization":http://d.hatena.ne.jp/nishiohirokazu/20140208/1391838220]]

[[Deep Learning for natural language processing, at Gunosy:http://sssslide.com/www.slideshare.net/yutakikuchi927/deep-learning-26647407]]


[[Deep Learning Tutorials:http://www.deeplearning.net/tutorial/]]
Links to two further tutorials:
-[[brief introduction to Machine Learning for AI:http://www.iro.umontreal.ca/~pift6266/H10/notes/mlintro.html]]
-[[introduction to Deep Learning algorithms:http://www.iro.umontreal.ca/~pift6266/H10/notes/deepintro.html]]


(Not deep: two-layer nets) [[A summary of Pattern Recognition and Machine Learning (PRML):http://aidiary.hatenablog.com/entry/20100829/1283068351]], [[Function approximation with a multilayer perceptron:http://aidiary.hatenablog.com/entry/20140122/1390395760]], [[Watching a multilayer perceptron converge:http://aidiary.hatenablog.com/entry/20140123/1390478589]], [[Handwritten digit recognition with a multilayer perceptron:http://aidiary.hatenablog.com/entry/20140201/1391218771]], [[MNIST handwritten digit recognition with a multilayer perceptron:http://aidiary.hatenablog.com/entry/20140205/1391601418]]
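The function-approximation posts above can be condensed into a short sketch. This is an illustrative NumPy implementation of a two-layer perceptron (one tanh hidden layer) trained by plain gradient descent to fit sin(x); it is not the blog's code, and the hidden size, learning rate, and iteration count are arbitrary choices.

```python
import numpy as np

# Two-layer perceptron fitting y = sin(x) by full-batch gradient descent.
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 100).reshape(-1, 1)
y = np.sin(X)

H = 10                                   # hidden units (arbitrary)
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05

losses = []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2                   # linear output layer
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh'(a) = 1 - tanh(a)^2
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])             # the loss should drop
```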



-Also available as a monograph:
  [[Learning Deep Architectures for AI:http://www.iro.umontreal.ca/~lisa/publications2/index.php/publications/show/239]]


[[Conversational Speech Transcription Using Context-Dependent Deep Neural Network:http://msr-waypoint.com/pubs/153169/CD-DNN-HMM-SWB-Interspeech2011-Pub.pdf]]

Collobert & Weston 2008 [[A unified architecture for natural language processing: deep neural networks with multitask learning:http://www.thespermwhale.com/jaseweston/papers/unified_nlp.pdf]]

Collobert, Weston et al. 2011 [[Natural Language Processing (almost) from Scratch:http://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/35671.pdf]] Journal of Machine Learning Research 12 (2011)


[[Study-group slides for the paper above:http://www.slideshare.net/alembert2000/deep-learning-6]]

[[Deep Learning via Semi-Supervised Embedding:http://cse.iitk.ac.in/users/cs671/2013/hw3/weston-ratle-collobert-12_deep-learning-via-semi-supervised-embedding.pdf]]

[[New types of deep neural network learning for speech recognition and related applications: an overview:http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6639344&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6639344]]

[[Deep Learning technology today:http://sssslide.com/www.slideshare.net/beam2d/deep-learning20140130]]

[[Deep Learning:http://sssslide.com/www.slideshare.net/tomohiromito/deep-learning-22425259]]

[[Implementing Deep Learning:http://sssslide.com/www.slideshare.net/tushuhei/121227deep-learning-iitsuka]]

[[Deep learning implementation:http://sssslide.com/www.slideshare.net/yurieoka37/ss-28152060]]

[[Learning Deep Architectures for AI (Bengio):http://sssslide.com/www.slideshare.net/alembert2000/learning-deep-architectures-for-ai-3-deep-learning]]

[[The neural networks strike back (Okanohara):http://research.preferred.jp/2012/11/deep-learning/]] Many pointers

[[Ogata Lab research projects: deep learning:http://ogata-lab.jp/ja/projects_ja.html]]

[[Deep learning tutorial / research-trends survey (Okatani, Tohoku Univ.):http://www.vision.is.tohoku.ac.jp/files/9313/6601/7876/CVIM_tutorial_deep_learning.pdf]]

[[Deep Learning for a general audience (Okanohara):http://www.slideshare.net/pfi/deep-learning-22350063]]


PyConJP 2014 [[Deep Learning for Image Recognition in Python:http://www.slideshare.net/atelierhide/py-conjp2014-slideshare]] Seems to recommend DeCAF?

Learning material around PyConJP (including DeCAF)
-[[Search results:http://b.hatena.ne.jp/search/text?q=decaf]]
-[[Deep Learning and image recognition: history, theory, practice:http://www.slideshare.net/nlab_utokyo/deep-learning-40959442]]
-Bonus: [[Distributional representations of word and phrase meaning via Gaussian distributions:http://www.anlp.jp/proceedings/annual_meeting/2014/pdf_dir/A7-2.pdf]]
-[[Development and Experiment of Deep Learning with Caffe and maf:http://www.slideshare.net/KentaOono/how-to-develop]]
-[[Easy image classification with Caffe:http://techblog.yahoo.co.jp/programming/caffe-intro/]]
--[2] [[Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell, "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition", arXiv:1310.1531:http://arxiv.org/pdf/1310.1531.pdf]]
--[3] Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation", CVPR2014, PDF
-[[Deep Learning: what you should know to use it well:http://www.slideshare.net/Takayosi/miru2014-tutorial-deeplearning-37219713]] << very well organized
-YouTube [[Getting started with machine learning using Python and scikit-learn:https://www.youtube.com/watch?v=yp6LIjcZgoQ]]
-[[A list of Python libraries useful for machine learning:http://blog.negativemind.com/2014/06/06/%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92%E3%81%AB%E5%BD%B9%E7%AB%8B%E3%81%A4python%E3%83%A9%E3%82%A4%E3%83%96%E3%83%A9%E3%83%AA%E4%B8%80%E8%A6%A7/]]
-[[UCB-ICSI-Vision-Group/decaf-release:https://github.com/UCB-ICSI-Vision-Group/decaf-release/wiki]] DeCAF source on GitHub

[[Deep Learning technology today (2014-01-30):http://www.slideshare.net/beam2d/deep-learning20140130]] Short, but seems to sum things up well

<Theano>~
[[Theano:http://deeplearning.net/software/theano/]]

[[Implementing Deep Learning with Theano:http://www.deeplearning.net/tutorial/]]

[[Introduction to Deep Learning execution tools:http://wazalabo.com/wp-content/uploads/2014/09/20140905_section_1_isp.pdf]] slides

([[My own notes>ノート/theano]])

[[Introduction to Theano:http://www.chino-js.com/ja/tech/theano-rbm/]]


[[Big-data analysis with Deep Learning: methods and CUDA acceleration:http://on-demand.gputechconf.com/gtc/2014/jp/sessions/1003.pdf]]

<IBISML>~
[[Machine learning:http://ibisforest.org/index.php?%E6%A9%9F%E6%A2%B0%E5%AD%A6%E7%BF%92#d06fcc1e]]

[[Applications of deep learning:http://ibisml.org/archive/ibis2013/pdfs/ibis2013-okatani.pdf]]

<U-Montreal>~
[[A tutorial on Deep Learning:http://videolectures.net/jul09_hinton_deeplearn/]]

[[Introduction to Deep Learning Algorithms:http://www.iro.umontreal.ca/~pift6266/H10/notes/deepintro.html]]

[[Readings on Deep Networks:http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/ReadingOnDeepNetworks]]

[[Deep Networks bibliography:http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/DeepNetworksBibliography]]

<Misc>~
[[(ISM open seminar) Probabilistic topic models:http://www.ism.ac.jp/~daichi/lectures/ISM-2012-TopicModels-daichi.pdf]]

<Articles on applications>~
[[What is the "deep learning" that Google and Baidu are investing in?:http://gendai.ismedia.jp/articles/-/35512]]

[[(AI) What Google aims to build: an even more accurate search engine powered by artificial intelligence:http://matome.naver.jp/odai/2137272658411877401]] (NAVER matome)

[[Deep learning will drive the evolution of big-data analysis:http://www.mizuho-ir.co.jp/publication/column/2013/1119.html]] (Mizuho Information & Research Institute)

[[Similarities and differences between the cerebral cortex and deep learning:https://staff.aist.go.jp/y-ichisugi/rapid-memo/brain-deep-learning.html]]

[[Deep learning glossary:https://staff.aist.go.jp/y-ichisugi/rapid-memo/deep-learning.html]]

[[Andrew Ng, Deep Learning with COTS HPC Systems:http://cs.stanford.edu/people/ang/?page_id=414]]


<PyLearn2>~
[[pylearn2 Tutorial:http://deeplearning.net/software/pylearn2/tutorial/index.html#tutorial]]

[[pylearn2 Document and Installation:http://deeplearning.net/software/pylearn2/#download-and-installation]]

[[theano_rbm 0.1 documentation (Japanese translation?):http://www.chino-js.com/ja/tech/theano-rbm/#restricted-boltzmann-machine-rbm]]
 "This document explains how to use Theano, a numerical computation library for Python, and implements a Restricted Boltzmann Machine as the worked example."

[[pylearn2 tutorial: Softmax regression:http://nbviewer.ipython.org/github/lisa-lab/pylearn2/blob/master/pylearn2/scripts/tutorials/softmax_regression/softmax_regression.ipynb]]

 This ipython notebook will teach you the basics of how softmax regression works, and show you how to do softmax regression in pylearn2.
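As a reading aid for the notebook above, here is a minimal NumPy sketch of what softmax regression computes: a single linear map followed by a softmax over classes, trained with cross-entropy whose gradient with respect to the logits is simply (probabilities minus one-hot labels). This is independent of pylearn2; all sizes and data are toy choices.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))       # 5 toy samples, 3 features
y = np.array([0, 1, 2, 3, 0])     # toy labels over 4 classes
W = rng.normal(size=(3, 4))
b = np.zeros(4)

p = softmax(X @ W + b)                        # class probabilities per sample
grad_logits = (p - np.eye(4)[y]) / len(X)     # cross-entropy gradient
print(p.sum(axis=1))                          # each row sums to 1
```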


[[Studying and experimenting with Deep Learning using Theano + Pylearn2:http://takatakamanbou.hatenablog.com/entry/2014/08/21/233000]]

<Datasets etc.>
-[[CIFAR-10/100:http://www.cs.toronto.edu/~kriz/cifar.html]] Large collections of small 32x32 images of many kinds. CIFAR-10 has 10 classes × 6,000 images each (5,000 training + 1,000 test; classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck); CIFAR-100 has 100 classes × 600 images each. Frequently referenced.

-[[THE MNIST (Mixed National Institute of Standards and Technology) DATABASE of handwritten digits:http://yann.lecun.com/exdb/mnist/]] 70,000 handwritten digit images (60,000 training + 10,000 test), together with benchmark results ranging from simple methods to various deep learning approaches.

Related datasets:
-[[(University of Oxford) 102 Category Flower Dataset:http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html]]
-[[Stanford Dogs Dataset:http://vision.stanford.edu/aditya86/ImageNetDogs/]]
-[[Caltech-UCSD Birds-200-2011:http://www.vision.caltech.edu/visipedia/CUB-200-2011.html]]
-[[Large-scale image datasets:http://d.hatena.ne.jp/n_hidekey/20120115/1326613794]] Quoting:~
"Datasets used for image recognition and retrieval keep getting larger.
Here is a summary of some representative ones and others found recently.
(Rough criterion: 100,000+ images for labeled data, 1,000,000+ for unlabeled.)"
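For orientation, the CIFAR-10 python-version batches linked above store each 32x32 colour image as one flat row of 3072 values: 1024 red, then 1024 green, then 1024 blue (per the dataset page). A sketch of unpacking one such row, using a synthetic stand-in row rather than the real file:

```python
import numpy as np

# Stand-in for one CIFAR-10 row (real rows are uint8 pixel values).
row = np.arange(3072)

# channels-first (3, 32, 32), then reorder to height x width x channel
img = row.reshape(3, 32, 32).transpose(1, 2, 0)
print(img.shape)   # (32, 32, 3)
```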

<Bonus 2>~
[[Learning vector representations of words with neural nets: trying word2vec on Twitter data:http://yamitzky.hatenablog.com/entry/2014/03/11/222223]]

T. Mikolov et al. (2013): [[Efficient Estimation of Word Representations in Vector Space:http://arxiv.org/pdf/1301.3781.pdf]] Proceedings of the ICLR 2013 Workshop, together with~
  [[Paper introduction: "Efficient Estimation of Word Representations in Vector Space":http://qiita.com/nishio/items/3ac6f0ea598874644843]],~
  [[Open Review:http://openreview.net/document/7b076554-87ba-4e1e-b7cc-2ac107ce8e4d]]

T. Mikolov et al (2013): [[Distributed Representations of Words and Phrases and their Compositionality:http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf]] Proceedings of NIPS 2013

[[NIPS2013読み会: Distributed Representations of Words and Phrases and their Compositionality:http://www.slideshare.net/unnonouno/nips2013-distributed-representations-of-words-and-phrases-and-their-compositionality]]

[[Commentary on "Distributed Representations of Sentences and Documents" (slides by Nishio):http://sssslide.com/www.slideshare.net/nishio/distributed-representation-of-sentences-and-documents]]

Turney, Pantel (2010) [[From Frequency to Meaning: Vector Space Models of Semantics:http://www.jair.org/media/2934/live-2934-4846-jair.pdf]]

Turney (2011) [[Distributional Semantics Beyond Words: Supervised Learning of Analogy and Paraphrase:http://www.transacl.org/wp-content/uploads/2013/10/paperno29.pdf]]

Word2Vec additions (2014-06-11)
-Post dated 2014-01-25: [[I presented the word2vec paper (more precisely, its follow-up) at the NIPS2013 reading group hosted by @sla:http://blog.unnono.net/2014/01/nips2013word2vec.html]]~
~
Papers cited therein:
--[[Efficient Estimation of Word Representations in Vector Space (Tomas Mikolov):http://arxiv.org/pdf/1301.3781.pdf]]
--[[Hierarchical Probabilistic Neural Network Language Model (Frederic Morin):http://www.iro.umontreal.ca/~lisa/pointeurs/hierarchical-nnlm-aistats05.pdf]]

***More articles on Word2Vec internals [#m7534763]

-[[Paper introduction: "Distributed Representations of Words and Phrases and their Compositionality":http://qiita.com/nishio/items/3860fe198d65d173af6b]]

-[[A Closer Look at Skip-gram Modelling:http://homepages.inf.ed.ac.uk/ballison/pdf/lrec_skipgrams.pdf]]

-[[Wikipedia N-gram:http://en.wikipedia.org/wiki/N-gram#Skip-Gram]]

-[[word2vec Explained: Deriving Mikolov et al.’s Negative-Sampling Word-Embedding Method:http://www.cs.bgu.ac.il/~yoavg/publications/negative-sampling.pdf]]
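The negative-sampling objective derived in the Goldberg & Levy note above can be illustrated as a single SGD step: raise sigma(v·u) for the observed (word, context) pair and lower it for a few sampled negatives. This is a toy sketch, not the word2vec code; the vocabulary size, dimension, learning rate, and all indices are arbitrary choices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
vocab, dim = 20, 8
W_in = rng.normal(0, 0.1, (vocab, dim))    # word ("input") vectors
W_out = rng.normal(0, 0.1, (vocab, dim))   # context ("output") vectors
lr = 0.5

def sgns_step(word, ctx, negatives):
    """One skip-gram negative-sampling update for an observed pair."""
    v = W_in[word].copy()
    # label 1 for the true context, 0 for each sampled negative
    for c, label in [(ctx, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[c].copy()
        g = lr * (label - sigmoid(v @ u))  # gradient of the log-likelihood
        W_out[c] += g * v
        W_in[word] += g * u

before = sigmoid(W_in[3] @ W_out[7])
for _ in range(50):
    sgns_step(3, 7, [1, 2, 5, 9])          # fixed toy negative samples
after = sigmoid(W_in[3] @ W_out[7])
print(before, after)   # the observed pair's score should rise toward 1
```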

***Statistical Semantics [#m75f76ed]

-[[Introduction to Statistical Semantics: from the distributional hypothesis to word2vec (Unno, PFI):http://www.slideshare.net/unnonouno/20140206-statistical-semantics]]

-[[Distributional Semantic Models, NAACL-HLT 2010 Tutorial (Evert):http://wordspace.collocations.de/doku.php/course:acl2010:start]]

-[[Distributional Semantics (Baroni):https://www.cs.utexas.edu/users/mooney/cs388/slides/dist-sem-intro-NLP-class-UT.pdf]]

-[[How to make words with vectors: Phrase generation in distributional semantics (Dinu and Baroni):http://clic.cimec.unitn.it/marco/publications/acl2014/dinu-baroni-generation-acl2014.pdf]]

-[[Introductory slides for the above "How to make words with vectors: Phrase generation in distributional semantics" (by Unno):http://www.slideshare.net/unnonouno/20140712-acl2014yomi]]

***Handwritten character recognition [#na95f3a4]
[[Python and Deep Learning: handwritten character recognition:http://www.slideshare.net/mokemokechicken/pythondeep-learning]] A worked example using Python through Theano (slideshow)

***Image-recognition benchmarks and challenges [#o9bc04fa]
-[[Pascal VOC 〜 The PASCAL Visual Object Classes Homepage:http://pascallin.ecs.soton.ac.uk/challenges/VOC/]]
 The PASCAL VOC project:
 
 Provides standardised image data sets for object class recognition
 Provides a common set of tools for accessing the data sets and annotations
 Enables evaluation and comparison of different methods 
 Ran challenges evaluating performance on object class recognition (from 2005-2012,  now finished)
 Pascal VOC data sets
 
 Data sets from the VOC challenges are available through the challenge links below,
 and evaluation of new methods on these data sets can be achieved through the PASCAL
 VOC Evaluation Server.  The evaluation server will remain active even though the
 challenges have now finished.

-[[Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) Workshop:http://image-net.org/challenges/LSVRC/2013/iccv2013.php]]
 The purpose of the workshop is to present the methods and results of the Image Net 
 Large Scale Visual Recognition Challenge (ILSVRC) 2013 and the new Fine-Grained
 Challenge 2013. Challenge participants with the most successful and innovative
 entries will be invited to present.
 
 ILSVRC2013
 
 The ILSVRC2013 evaluates algorithms for object detection and image classification
 at large scale. This year there are three competitions:
 
 A PASCAL-style detection challenge on fully labeled data for 200 categories of objects,
 An image classification challenge with 1000 categories, and
 An image classification plus object localization challenge with 1000 categories.
 For more details, please visit ILSVRC 2013. Also, ILSVRC2013 results are now available.
 
 Fine-Grained Challenge (NEW)
 
 The new Fine-Grained Challenge 2013 runs concurrently with ILSVRC2013 this year,
 and targets classification among categories which are both visually and
 semantically similar. For more details, please visit Fine-Grained Challenge 2013.

-[[ICCV-ILSVRC 2013: continuing from last time, a report on the ImageNet Large-scale Visual Recognition Challenge (ILSVRC):http://d.hatena.ne.jp/nlab_utokyo/20140112]]

-[[Deep Learning and image recognition: history, theory, practice:http://www.slideshare.net/nlab_utokyo/deep-learning-40959442]] Slides by Hideki Nakayama


[[Implementing Deep Learning in Python (Logistic Regression), 2013/01/06:http://blog.yusugomori.com/post/39830852050/python-deep-learning-logistic-regression]]

[[Implementing Deep Learning in Python (Restricted Boltzmann Machine), 2013/01/05:http://blog.yusugomori.com/post/39741567354/python-deep-learning-restricted-boltzmann-machine]]

Image-recognition basics (not directly deep learning, but circa 2006-7 techniques still referenced often enough to be worth knowing): [[SIFT and SURF features:http://www.slideshare.net/lawmn/siftsurf]]

LeCun's Convolutional Neural Network paper (Proc. IEEE, Nov 1998): [[Gradient-based Learning Applied to Document Recognition:http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf]]

[[Deep learning for image recognition: applications of deep learning and CNN basics:http://www.slideshare.net/yomoyamareiji/ss-36982686]]

Maxout (Goodfellow) [[Maxout Networks:http://arxiv.org/pdf/1302.4389v4.pdf]] Replaces activation functions such as sigmoid: by inserting one extra stage of intermediate nodes and taking their maximum, arbitrary (convex) activation functions can be approximated
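The idea above in miniature: a maxout unit computes k linear "pieces" z_i = W_i x + b_i and outputs their maximum, so the activation shape is learned rather than fixed. A toy NumPy sketch (dimensions and values are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # input (toy dimensions)
k = 3                         # number of linear pieces
W = rng.normal(size=(k, 4))
b = rng.normal(size=k)

z = W @ x + b                 # one pre-activation per piece
out = z.max()                 # maxout output is their maximum
# Note: ReLU is the special case k = 2 with pieces (w.x + b, 0).
print(out)
```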

Maxout slides: [[Maxout Networks:http://www.slideshare.net/stjunya/maxout-networks]]

Krizhevsky [[ImageNet Classification with Deep Convolutional Neural Networks:http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf]]

[[Cat-breed identification with Deep Learning:http://qiita.com/wellflat/items/0b6b859bb275fd4526ed]] Deep CNN with Caffe; source on GitHub.

(Caffe Tutorial) [[DIY Deep Learning for Vision: a Hands-On Tutorial with Caffe:https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/preview?sle=true&slide=id.p]]

CVPR 2014 (June 2014) [[Tutorial on Deep Learning For Vision:https://sites.google.com/site/deeplearningcvpr2014/]] 

Andrej Karpathy blog  [[Hacker's Guide to Neural Networks:http://karpathy.github.io/neuralnets/]]

**Additions (2015-02-20) [#od3a6208]
-[[IBIS2013 Okatani, "Deep learning" tutorial:http://ibisml.org/archive/ibis2013/pdfs/ibis2013-okatani.pdf]]

-[[Understanding the basic workings of Restricted Boltzmann Machines and Deep Belief Networks, and learning the black magic (tricks for getting good RBM performance) from "A Practical Guide to Training Restricted Boltzmann Machines" (G. E. Hinton, 2012):http://qiita.com/t_Signull/items/f776aecb4909b7c5c116#rbm%E3%81%AE%E5%AD%A6%E7%BF%92%E3%81%AE%E6%B5%81%E3%82%8C]]

-[[Deep Learning viewed through RBMs:http://qiita.com/t_Signull/items/f776aecb4909b7c5c116]]

-[[PDFs for learning machine learning from zero:http://matome.naver.jp/odai/2137978900585239401]]

-[[RBMs, Deep Learning, and learning (Whole Brain Architecture young researchers' group, 3rd DL study meeting):http://www.slideshare.net/takumayagi/rbm-andlearning]]

-[[Deep Learning with Theano <6>: Restricted Boltzmann Machines, part 1:http://sinhrks.hatenablog.com/entry/2015/01/12/225149]]

-[[Deep Learning from scratch, part 4: what is a Restricted Boltzmann Machine?:http://rishida.hatenablog.com/entry/2014/03/08/111330]]

-[[DIY Deep Learning for Vision: a Hands-on Tutorial with Caffe:https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/preview?sle=true&slide=id.p]]

-[[Deep Learning Tutorials: the denoising autoencoder:http://nonbiri-tereka.hatenablog.com/entry/2014/05/15/030852]]

-[[Implementing a denoising autoencoder (C++):http://nonbiri-tereka.hatenablog.com/entry/2014/09/25/111308]]

-[[How to use Caffe as a autoencoder by raw-image data type?:https://groups.google.com/forum/#!topic/caffe-users/MLzID3tEFIM]]

-[[Stochastic gradient descent for denoising autoencoders (derivation of the equations):http://blog.yusugomori.com/post/42116682299/denoising-autoencoders]]

-[[Extracting and Composing Robust Features with Denoising Autoencoders:http://icml2008.cs.helsinki.fi/papers/592.pdf]]

-[[Deep Learning Tutorial (LISA lab, U Montreal):http://deeplearning.net/tutorial/deeplearning.pdf]]

-[[UFLDL Deep Learning Tutorial: This tutorial will teach you the main ideas of Unsupervised Feature Learning and Deep Learning.:http://deeplearning.stanford.edu/tutorial/]]

-[[Deep learning implementation (Yurie Oka):http://www.slideshare.net/yurieoka37/ss-28152060]]

-[[Deep learning tutorial materials in Japanese, etc.:http://besom1.blog85.fc2.com/blog-entry-113.html]]

-[[Learning Deep Architectures for AI (Shohei Ohsawa):http://www.slideshare.net/alembert2000/learning-deep-architectures-for-ai-3-deep-learning]]


***Around Word2Vec [#mf66b573]
-Mikolov PhD slides [[Statistical Language Models Based on Neural Networks:http://www.fit.vutbr.cz/~imikolov/rnnlm/google.pdf]]
-Mikolov PhD thesis [[STATISTICAL LANGUAGE MODELS BASED ON NEURAL NETWORKS:http://www.fit.vutbr.cz/~imikolov/rnnlm/thesis.pdf]]
-Distributed representations of words and phrases and their compositionality
-Efficient estimation of word representations in vector space
-Distributed representations of sentences and documents
-Linguistic regularities in continuous space word representations
-Extensions of recurrent neural network language model
-Distributed representations (Hinton, 1984)
-A neural probabilistic language model (Bengio, 2003)
-Deriving adjectival scales from continuous space word representations (Kim)
-A unified architecture for natural language processing (Collobert and Weston)


***Learning Pipeline (UC Berkeley) [#e8f818ef]
-2015-03-13 [[Machine Learning Pipelines:http://www.slideshare.net/jeykottalam/pipelines-ampcamp]]
-[[ML Pipelines:https://amplab.cs.berkeley.edu/ml-pipelines/]]

**Reinforcement learning [#t651f45b]
-2015-05-14 [[Implementing a Deep Q-Network in Caffe for deep reinforcement learning:http://d.hatena.ne.jp/muupan/20141021/1413850461]]

-2015-05-14 [[Wikipedia (ja): Reinforcement learning:http://ja.wikipedia.org/wiki/%E5%BC%B7%E5%8C%96%E5%AD%A6%E7%BF%92]]
-2015-05-14 [[Fundamentals of reinforcement learning:http://www.jnns.org/niss/2000/text/koike2.pdf]]

-2015-05-14 [[Reinforcement learning:http://www.sist.ac.jp/~kanakubo/research/reinforcement_learning.html]]
-2015-05-14 [[Introduction to reinforcement learning:http://www.slideshare.net/mitmul/ss-20417728]]
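The update rule that a Deep Q-Network approximates with a neural network can be shown in tabular form. A toy sketch on a 4-state chain (not the Caffe code above; states, rewards, and all parameters are arbitrary choices): action 1 moves right, action 0 moves left, and reaching the rightmost state gives reward 1.

```python
import numpy as np

n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(300):            # episodes under a random behaviour policy
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(n_actions))
        s2, r = step(s, a)
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))   # greedy policy: non-terminal states prefer "right"
```

Q-learning is off-policy, so even this uniformly random behaviour policy lets the table converge to the optimal action values.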
