quanteda-package | An R package for the quantitative analysis of textual data |
as.character.corpus | Get or assign corpus texts |
as.character.tokens | Coercion, checking, and combining functions for tokens objects |
as.corpus.corpuszip | Coerce a compressed corpus to a standard corpus |
as.dfm | Coercion and checking functions for dfm objects |
as.dictionary | Coercion and checking functions for dictionary objects |
as.igraph.fcm | Plot a network of feature co-occurrences |
as.list.dist | Coerce a dist object into a list |
as.list.tokens | Coercion, checking, and combining functions for tokens objects |
as.matrix.dfm | Coerce a dfm to a matrix or data.frame |
as.network.fcm | Plot a network of feature co-occurrences |
as.tokens | Coercion, checking, and combining functions for tokens objects |
as.tokens.list | Coercion, checking, and combining functions for tokens objects |
as.tokens.spacyr_parsed | Coercion, checking, and combining functions for tokens objects |
as.yaml | Convert quanteda dictionary objects to the YAML format |
bootstrap_dfm | Bootstrap a dfm |
c.tokens | Coercion, checking, and combining functions for tokens objects |
char_ngrams | Create ngrams and skipgrams from tokens |
char_segment | Segment texts on a pattern match |
char_tolower | Convert the case of character objects |
char_tortl | [Experimental] Change direction of words in tokens |
char_toupper | Convert the case of character objects |
char_trim | Remove sentences based on their token lengths or a pattern match |
char_wordstem | Stem the terms in an object |
collocations | Identify and score multi-word expressions |
convert | Convert a dfm to a non-quanteda format |
corpus | Construct a corpus object |
corpus.character | Construct a corpus object |
corpus.Corpus | Construct a corpus object |
corpus.corpus | Construct a corpus object |
corpus.data.frame | Construct a corpus object |
corpus.kwic | Construct a corpus object |
corpus_reshape | Recast the document units of a corpus |
corpus_sample | Randomly sample documents from a corpus |
corpus_segment | Segment texts on a pattern match |
corpus_subset | Extract a subset of a corpus |
corpus_trim | Remove sentences based on their token lengths or a pattern match |
data_char_sampletext | A paragraph of text for testing various text-based functions |
data_char_ukimmig2010 | Immigration-related sections of 2010 UK party manifestos |
data_corpus_dailnoconf1991 | Confidence debate from 1991 Irish Parliament |
data_corpus_inaugural | US presidential inaugural address texts |
data_corpus_irishbudget2010 | Irish budget speeches from 2010 |
data_dfm_LBGexample | dfm from data in Table 1 of Laver, Benoit, and Garry (2003) |
data_dfm_lbgexample | dfm from data in Table 1 of Laver, Benoit, and Garry (2003) |
data_dictionary_LSD2015 | Lexicoder Sentiment Dictionary (2015) |
dfm | Create a document-feature matrix |
dfm_compress | Recombine a dfm or fcm by combining identical dimension elements |
dfm_group | Combine documents in a dfm by a grouping variable |
dfm_keep | Select features from a dfm or fcm |
dfm_lookup | Apply a dictionary to a dfm |
dfm_remove | Select features from a dfm or fcm |
dfm_replace | Replace features in a dfm |
dfm_sample | Randomly sample documents or features from a dfm |
dfm_select | Select features from a dfm or fcm |
dfm_smooth | Weight the feature frequencies in a dfm |
dfm_sort | Sort a dfm by frequency of one or more margins |
dfm_subset | Extract a subset of a dfm |
dfm_tfidf | Weight a dfm by tf-idf |
dfm_tolower | Convert the case of the features of a dfm and combine |
dfm_toupper | Convert the case of the features of a dfm and combine |
dfm_trim | Trim a dfm using frequency threshold-based feature selection |
dfm_weight | Weight the feature frequencies in a dfm |
dfm_wordstem | Stem the terms in an object |
dictionary | Create a dictionary |
docfreq | Compute the (weighted) document frequency of a feature |
docnames | Get or set document names |
docnames<- | Get or set document names |
docvars | Get or set document-level variables |
docvars<- | Get or set document-level variables |
fcm | Create a feature co-occurrence matrix |
fcm_compress | Recombine a dfm or fcm by combining identical dimension elements |
fcm_keep | Select features from a dfm or fcm |
fcm_remove | Select features from a dfm or fcm |
fcm_select | Select features from a dfm or fcm |
fcm_sort | Sort an fcm in alphabetical order of the features |
fcm_tolower | Convert the case of the features of a dfm and combine |
fcm_toupper | Convert the case of the features of a dfm and combine |
featnames | Get the feature labels from a dfm |
head.corpus | Return the first or last part of a corpus |
head.dfm | Return the first or last part of a dfm |
is.collocations | Identify and score multi-word expressions |
is.dfm | Coercion and checking functions for dfm objects |
is.dictionary | Coercion and checking functions for dictionary objects |
is.fcm | Create a feature co-occurrence matrix |
is.kwic | Locate keywords-in-context |
is.phrase | Declare a compound character to be a sequence of separate pattern matches |
is.tokens | Coercion, checking, and combining functions for tokens objects |
kwic | Locate keywords-in-context |
metacorpus | Get or set corpus metadata |
metacorpus<- | Get or set corpus metadata |
metadoc | Get or set document-level meta-data |
metadoc<- | Get or set document-level meta-data |
ndoc | Count the number of documents or features |
nfeat | Count the number of documents or features |
nfeature | Count the number of documents or features |
nscrabble | Count the Scrabble letter values of text |
nsentence | Count the number of sentences |
nsyllable | Count syllables in a text |
ntoken | Count the number of tokens or types |
ntype | Count the number of tokens or types |
phrase | Declare a compound character to be a sequence of separate pattern matches |
quanteda | An R package for the quantitative analysis of textual data |
quanteda_options | Get or set package options for quanteda |
spacyr-methods | Extensions for and from spacy_parse objects |
spacy_parse.corpus | Extensions for and from spacy_parse objects |
sparsity | Compute the sparsity of a document-feature matrix |
tail.corpus | Return the first or last part of a corpus |
tail.dfm | Return the first or last part of a dfm |
textmodel_affinity | Class affinity maximum likelihood text scaling model |
textmodel_ca | Correspondence analysis of a document-feature matrix |
textmodel_lsa | Latent Semantic Analysis |
textmodel_nb | Naive Bayes classifier for texts |
textmodel_wordfish | Wordfish text model |
textmodel_wordscores | Wordscores text model |
textplot_influence | Influence plot for text scaling models |
textplot_keyness | Plot word keyness |
textplot_network | Plot a network of feature co-occurrences |
textplot_scale1d | Plot a fitted scaling model |
textplot_wordcloud | Plot features as a wordcloud |
textplot_xray | Plot the dispersion of key word(s) |
texts | Get or assign corpus texts |
texts<- | Get or assign corpus texts |
textstat_collocations | Identify and score multi-word expressions |
textstat_dist | Similarity and distance computation between documents or features |
textstat_frequency | Tabulate feature frequencies |
textstat_keyness | Calculate keyness statistics |
textstat_lexdiv | Calculate lexical diversity |
textstat_readability | Calculate readability |
textstat_simil | Similarity and distance computation between documents or features |
tokens | Tokenize a set of texts |
tokens_compound | Convert token sequences into compound tokens |
tokens_keep | Select or remove tokens from a tokens object |
tokens_lookup | Apply a dictionary to a tokens object |
tokens_ngrams | Create ngrams and skipgrams from tokens |
tokens_remove | Select or remove tokens from a tokens object |
tokens_replace | Replace types in a tokens object |
tokens_sample | Randomly sample documents from a tokens object |
tokens_select | Select or remove tokens from a tokens object |
tokens_skipgrams | Create ngrams and skipgrams from tokens |
tokens_subset | Extract a subset of a tokens object |
tokens_tolower | Convert the case of tokens |
tokens_tortl | [Experimental] Change direction of words in tokens |
tokens_toupper | Convert the case of tokens |
tokens_wordstem | Stem the terms in an object |
topfeatures | Identify the most frequent features in a dfm |
types | Get word types from a tokens object |
unlist.tokens | Coercion, checking, and combining functions for tokens objects |
+.tokens | Coercion, checking, and combining functions for tokens objects |
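The index above covers the package's public API. As a quick orientation, the sketch below strings together a handful of the listed functions into a typical workflow: corpus construction, tokenization, building a document-feature matrix, and a few descriptive statistics. It is a minimal sketch assuming a current quanteda installation and the built-in data_corpus_inaugural object; exact argument names may differ slightly between package versions.

library(quanteda)

# construct a corpus from the built-in US inaugural address texts
corp <- corpus(data_corpus_inaugural)
summary(corp, n = 5)

# tokenize, lowercase, and drop punctuation
toks <- tokens(corp, remove_punct = TRUE)
toks <- tokens_tolower(toks)

# build a document-feature matrix and inspect the most frequent features
dfmat <- dfm(toks)
topfeatures(dfmat, 10)

# weight by tf-idf and check how sparse the matrix is
dfmat_tfidf <- dfm_tfidf(dfmat)
sparsity(dfmat)

# keywords-in-context for a pattern of interest
head(kwic(toks, "liberty"))

From the dfm, the textstat_* and textmodel_* functions listed above (e.g. textstat_frequency, textstat_keyness, textmodel_wordfish) can be applied directly for further analysis.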