# Sequence Model

Sequence models are used when the input and/or output data is a sequence. They apply to tasks such as speech recognition, music generation, sentiment classification, DNA sequence analysis, machine translation, video activity recognition, etc.

Notation:

* $$x$$: input. We use $$x^{<t>}$$ to denote the element at position $$t$$ of the input sequence.
* $$y$$: output. We use $$y^{<t>}$$ to denote the element at position $$t$$ of the output sequence.
* $$T$$ denotes sequence length: $$T\_x$$ and $$T\_y$$ are the lengths of the input and output.
* $$x\_i, y\_i$$ denote the $$i^{th}$$ training/testing example; $$T\_x^i, T\_y^i$$ denote its input and output lengths.

Word representation: build a vocabulary (a list of all words the model knows), then represent each word as a one-hot vector over that vocabulary.
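A minimal sketch of this one-hot encoding, using a tiny made-up vocabulary (real vocabularies usually contain tens of thousands of words):

```python
import numpy as np

# Hypothetical toy vocabulary; a real one would be much larger.
vocab = ["a", "and", "harry", "potter", "zulu"]
word_to_index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    """Return a one-hot vector for the given word over the vocabulary."""
    vec = np.zeros(len(vocab))
    vec[word_to_index[word]] = 1.0
    return vec

x = one_hot("harry")  # all zeros except a 1 at the index of "harry"
```

Each $$x^{<t>}$$ in a sentence is then one such vector, one per word.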

### RNN

Problems with a standard NN: the input and output can have different lengths across examples, and features learned at one position of the text are not shared with other positions.

RNN:

![](/files/VNW6r14m7DtnXdZsDwTB)

The activation computed for the previous input (word) is fed into the next time step, so the output at one position can affect subsequent positions (a unidirectional RNN).

Forward prop: initialize $$a^{<0>} = 0$$. Then $$a^{<t>} = g(w\_{aa}a^{<t-1>} + w\_{ax}x^{<t>} + b\_a)$$ and $$\hat{y}^{<t>} = g(w\_{ya}a^{<t>} + b\_y)$$, where $$g$$ is an activation function, $$w\_{aa}$$ is the weight matrix mapping the previous activation to the current activation, $$w\_{ax}$$ is the weight matrix mapping the input $$x$$ to the activation, and so on.

Common choices of activation: tanh (sometimes ReLU) for $$a$$; for $$y$$, sigmoid, softmax, or others, depending on the problem.

