<tr>
<td rowspan="3">2019-07-02</td>
<td>Étienne</td>
<td><em>[Tuto](https://lite.framacalc.org/tutonlp)</em> <strong>[Attention is all you need](uploads/9ec074f041166a253b48912fb7170f02/slides.pdf)</strong></td>
<td>
[encoder-decoder](https://arxiv.org/pdf/1406.1078.pdf), [attention](https://arxiv.org/pdf/1409.0473.pdf),
[attention dot product](https://arxiv.org/pdf/1508.04025.pdf),
</td>
</tr><tr>
<td rowspan="2">Bruno</td>
<td><em>[Tuto](https://lite.framacalc.org/tutonlp)</em> <strong>[Contextualized Word Embeddings](uploads/0fd3eeca02620ebc4b30361d5ab68928/Contextualized_Embeddings.pdf)</strong></td>
<td></td>
</tr><tr>
<td><em>[Tuto](https://lite.framacalc.org/tutonlp)</em> <strong>[Named Entity Recognition](uploads/0fd3eeca02620ebc4b30361d5ab68928/Contextualized_Embeddings.pdf)</strong></td>
<td></td>
</tr>
<tr>
<td rowspan="3">2019-06-20</td>
<td>Étienne</td>
<td><em>[Tuto](https://lite.framacalc.org/tutonlp)</em> <strong>[Tâches et métriques](uploads/77297905aeaff5efd37d5476c270c15a/0.pdf)</strong></td>
<td>
[On Chomsky and the Two Cultures of Statistical Learning](http://norvig.com/chomsky.html)<br />
[Liste de tâches sur Wikipédia](https://en.wikipedia.org/wiki/Natural_language_processing#Major_evaluations_and_tasks), [BLEU sur Wikipédia](https://en.wikipedia.org/wiki/BLEU)<br />
</td>
</tr><tr>
<td>Bruno</td>
<td><em>[Tuto](https://lite.framacalc.org/tutonlp)</em> <strong>[Word Embeddings](uploads/88a5bfd9a9c459accd00954dd3dced71/Word_embeddings.pdf)</strong></td>
<td>
[(Bengio 2003) A Neural Probabilistic Language Model](http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf)<br />
[(Collobert 2008) A unified architecture for natural language processing: Deep neural networks with multitask learning](http://www.thespermwhale.com/jaseweston/papers/unified_nlp.pdf)<br />
</td>
</tr><tr>
<td>Edouard</td>
<td><em>[Tuto](https://lite.framacalc.org/tutonlp)</em> <strong>[Language model, LSTM et Subwords](uploads/5db67569d609d3c0d73189bcfd14b96d/Tuto_NLP_LSTM.pdf)</strong></td>
<td>
<strong>[LSTM](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.676.4320&rep=rep1&type=pdf)</strong>: [A search space odyssey](https://arxiv.org/pdf/1503.04069), [Capacity and trainability in recurrent neural networks](https://arxiv.org/pdf/1611.09913), [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)<br />
<strong>Tips and tricks</strong>: [On the State of the Art of Evaluation in Neural Language Models](https://arxiv.org/pdf/1707.05589), [Regularizing and Optimizing LSTM Language Models](https://arxiv.org/pdf/1708.02182)<br />
</tr>
</tbody>
</table>
## Archive
[Framacalc tutorial](https://lite.framacalc.org/tutonlp)