WHAT YOU WILL LEARN
Use logistic regression, naïve Bayes, and word vectors to implement sentiment analysis, complete analogies & translate words (a minimal sentiment-classifier sketch follows this list).
Use dynamic programming, hidden Markov models, and word embeddings to implement autocorrect, autocomplete & identify part-of-speech tags for words.
Use recurrent neural networks, LSTMs, GRUs & Siamese networks in Trax for sentiment analysis, text generation & named entity recognition.
Use encoder-decoder, causal, & self-attention to machine translate complete sentences, summarize text, build chatbots & question-answering systems.
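To make the first course's techniques concrete, here is a minimal, hypothetical sketch (not course material) of a bag-of-words logistic-regression sentiment classifier built with scikit-learn; the tiny corpus and labels are invented purely for illustration.

```python
# Minimal bag-of-words + logistic regression sentiment classifier (illustrative sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny made-up corpus: 1 = positive, 0 = negative.
texts = ["great movie, loved it", "terrible plot and bad acting",
         "what a wonderful film", "boring and disappointing"]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer()             # turn each text into a word-count vector
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)  # fit logistic regression on the count vectors

print(clf.predict(vectorizer.transform(["loved the wonderful acting"])))  # -> [1]
```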
SKILLS YOU WILL GAIN
- Word2vec
- Machine Translation
- Sentiment Analysis
- Transformers
- Attention Models
- Word Embeddings
- Locality-Sensitive Hashing
- Vector Space Models
- Parts-of-Speech Tagging
- N-gram Language Models
- Autocorrect
- Word Embedding
About this Specialization
Natural Language Processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence that uses algorithms to interpret and manipulate human language.
This technology is one of the most broadly applied areas of machine learning and is critical to effectively analyzing large quantities of unstructured, text-heavy data. As AI continues to expand, so will the demand for professionals skilled at building models that analyze speech and language, uncover contextual patterns, and produce insights from text and audio.
By the end of this Specialization, you will be ready to design NLP applications that perform question-answering and sentiment analysis, create tools to translate languages and summarize text, and even build chatbots. These and other NLP applications are going to be at the forefront of the coming transformation to an AI-powered future.
This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and the co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper.
Applied Learning Project
This Specialization will equip you with machine learning fundamentals and state-of-the-art deep learning techniques needed to build cutting-edge NLP systems:
• Use logistic regression, naïve Bayes, and word vectors to implement sentiment analysis, complete analogies, translate words, and use locality-sensitive hashing to approximate nearest neighbors.
• Use dynamic programming, hidden Markov models, and word embeddings to autocorrect misspelled words, autocomplete partial sentences, and identify part-of-speech tags for words (a Viterbi decoding sketch follows this list).
• Use dense and recurrent neural networks, LSTMs, GRUs, and Siamese networks in TensorFlow and Trax to perform advanced sentiment analysis, text generation, named entity recognition, and to identify duplicate questions (see the Trax model sketch below).
• Use encoder-decoder, causal, and self-attention to perform advanced machine translation of complete sentences, text summarization, question-answering, and to build chatbots. Learn T5, BERT, transformer, reformer, and more with 🤗 Transformers! (A self-attention sketch closes this section.)