07-07-2025, 12:28 AM
Natural Language Processing
text processing, tokenization
New
Rating: 4.5 out of 5 (1 rating)
288 students
1hr 20min of on-demand video
Description
This course provides a comprehensive introduction to Natural Language Processing (NLP) – a field at the intersection of computer science, artificial intelligence, and linguistics that focuses on the interaction between computers and human language.
Students will learn how machines process, analyze, and understand human language in text and speech. The course covers key NLP techniques such as text preprocessing, tokenization, part-of-speech tagging, named entity recognition, sentiment analysis, language modeling, and text classification. Through a series of projects and assignments, learners will gain hands-on experience building real-world NLP applications such as:
Text summarizers
Spam filters
Sentiment analysis tools
Question answering systems
Chatbots
Key Topics Covered:
Basics of Natural Language Processing
Phases of NLP
Text Preprocessing (Tokenization, Stemming, Lemmatization, Stop-Word Removal), with a short sketch after this list
Part-of-Speech (POS) Tagging
Feature Extraction
Term Frequency and Inverse Document Frequency (TF-IDF)
Named Entity Recognition (NER), shown together with POS tagging in a second sketch after this list
Sentiment Analysis
Text Classification
Language Modeling (n-grams, word embeddings)
Recurrent Neural Networks (RNNs)
Long Short-Term Memory (LSTM) Networks
Attention Mechanisms
Transformer-Based Models
Introduction to Deep Learning for NLP (using RNNs, LSTMs, Transformers)
Practical Projects: Chatbots, Text Summarization, Machine Translation
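For readers who want a feel for the preprocessing topic above, here is a minimal sketch in Python using NLTK; the library choice and the example sentence are my own assumptions rather than code from the course.

```python
# Minimal text-preprocessing sketch (assumes the NLTK package is installed).
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

# One-time resource downloads; exact resource names can vary by NLTK version.
nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

text = "Natural Language Processing lets computers read and understand human text."

tokens = word_tokenize(text.lower())                                # tokenization
stop_words = set(stopwords.words("english"))
kept = [t for t in tokens if t.isalpha() and t not in stop_words]   # stop-word removal
stemmer = PorterStemmer()
print([stemmer.stem(t) for t in kept])                              # stemming
```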
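Similarly, a tiny spaCy sketch of the POS tagging and NER topics; the tooling here is again an assumption, and the course may use different libraries.

```python
# POS tagging and named entity recognition sketch (assumes spaCy and its small
# English model are installed: pip install spacy && python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple opened a new office in Berlin in March 2024.")

print([(tok.text, tok.pos_) for tok in doc])          # part-of-speech tags
print([(ent.text, ent.label_) for ent in doc.ents])   # named entities, e.g. ORG, GPE, DATE
```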
By the end of this course, learners will be able to:
Understand the core concepts and challenges in Natural Language Processing.
Apply text preprocessing techniques (tokenization, stemming, stopword removal).
Implement feature extraction methods like Bag of Words (BoW), TF-IDF, and word embeddings (Word2Vec, GloVe); a small TF-IDF example follows this list.
Build and evaluate machine learning models for text classification and sentiment analysis.
Work with named entity recognition (NER) and part-of-speech (POS) tagging.
Develop language models and understand sequence modeling using RNNs, LSTMs, and transformer models.
Fine-tune and use pre-trained models like BERT for downstream NLP tasks; a short pre-trained-model example also follows below.
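To make the feature-extraction and text-classification outcomes concrete, here is a small scikit-learn sketch; the library and the toy spam/ham data are illustrative assumptions, not material from the course.

```python
# TF-IDF features + a linear classifier on toy data (assumes scikit-learn is installed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "win a free prize now", "limited time offer click here",   # spam-like
    "meeting moved to friday", "see you at lunch today",       # ham-like
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# TfidfVectorizer turns each text into a term-frequency / inverse-document-frequency
# vector; logistic regression then learns a linear decision boundary over those vectors.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["free prize offer", "lunch on friday"]))  # expected: [1 0]
```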
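And as a hint of what working with pre-trained models looks like, a short Hugging Face sketch; it only loads a ready-made sentiment model through the pipeline API (my assumption about tooling), since fine-tuning BERT itself needs more setup than fits here.

```python
# Using a pre-trained transformer for sentiment analysis (assumes the
# transformers package is installed; the first call downloads a default model).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("This course was a clear and practical introduction to NLP!"))
# -> something like [{'label': 'POSITIVE', 'score': 0.99...}]
```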
https://www.udemy.com/course/natural-language-processing-c/
Enjoy!