Columns: desc (docstring, 3 to 26.7k chars), decl (function declaration, 11 to 7.89k chars), bodies (function body, 8 to 553k chars).
'Apply preprocessing to a single text document. This should perform tokenization in addition to any other desired preprocessing steps. Args: text (str): document text read from plain-text file. Returns: iterable of str: tokens produced from `text` as a result of preprocessing.'
def preprocess_text(self, text):
for character_filter in self.character_filters:
    text = character_filter(text)

tokens = self.tokenizer(text)
for token_filter in self.token_filters:
    tokens = token_filter(tokens)

return tokens
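As a rough illustration (not part of the class above), the same filter -> tokenize -> filter flow with a few hypothetical filters:

character_filters = [lambda s: s.lower()]                           # assumed: simple lowercasing filter
tokenizer = lambda s: s.split()                                     # assumed: whitespace tokenizer
token_filters = [lambda tokens: [t for t in tokens if len(t) > 2]]  # assumed: drop very short tokens

def preprocess(text):
    for character_filter in character_filters:
        text = character_filter(text)
    tokens = tokenizer(text)
    for token_filter in token_filters:
        tokens = token_filter(tokens)
    return tokens

print(preprocess("The quick brown Fox"))  # ['the', 'quick', 'brown', 'fox']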
'Yield tuples of functions and their output for each stage of preprocessing. This is useful for debugging issues with the corpus preprocessing pipeline.'
def step_through_preprocess(self, text):
for character_filter in self.character_filters:
    text = character_filter(text)
    yield (character_filter, text)

tokens = self.tokenizer(text)
yield (self.tokenizer, tokens)

for token_filter in self.token_filters:
    tokens = token_filter(tokens)
    yield (token_filter, tokens)
'Iterate over the collection, yielding one document at a time. A document is a sequence of words (strings) that can be fed into `Dictionary.doc2bow`. Each document will be fed through `preprocess_text`. That method should be overridden to provide different preprocessing steps. This method will need to be overridden if the metadata you\'d like to yield differs from the line number. Returns: generator of lists of tokens (strings); each list corresponds to a preprocessed document from the corpus `input`.'
def get_texts(self):
lines = self.getstream()
if self.metadata:
    for lineno, line in enumerate(lines):
        yield self.preprocess_text(line), (lineno,)
else:
    for line in lines:
        yield self.preprocess_text(line)
'Yield n random documents from the corpus without replacement. Given the number of remaining documents in a corpus, we need to choose n elements. The probability for the current element to be chosen is n/remaining. If we choose it, we just decrease the n and move to the next element. Computing the corpus length may be a costly operation so you can use the optional parameter `length` instead. Args: n (int): number of documents we want to sample. seed (int|None): if specified, use it as a seed for local random generator. length (int|None): if specified, use it as a guess of corpus length. It must be positive and not greater than actual corpus length. Yields: list[str]: document represented as a list of tokens. See get_texts method. Raises: ValueError: when n is invalid or length was set incorrectly.'
def sample_texts(self, n, seed=None, length=None):
random_generator = random if seed is None else random.Random(seed)
if length is None:
    length = len(self)

if not n <= length:
    raise ValueError('n is larger than length of corpus.')
if not 0 <= n:
    raise ValueError('Negative sample size.')

for i, sample in enumerate(self.getstream()):
    if i == length:
        break

    remaining_in_corpus = length - i
    chance = random_generator.randint(1, remaining_in_corpus)
    if chance <= n:
        n -= 1
        if self.metadata:
            yield self.preprocess_text(sample[0]), sample[1]
        else:
            yield self.preprocess_text(sample)

if n != 0:
    raise ValueError('length greater than number of documents in corpus')
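For intuition, a standalone sketch of the same selection rule over a plain list (the `docs`/`n` names here are illustrative, not part of the class):

import random

def sample_without_replacement(docs, n, seed=42):
    rng = random.Random(seed)
    picked = []
    for i, doc in enumerate(docs):
        # probability of picking the current element is n / (documents remaining)
        if rng.randint(1, len(docs) - i) <= n:
            picked.append(doc)
            n -= 1
    return picked

print(sample_without_replacement(list(range(10)), 3))  # three of the ten items, chosen uniformly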
'Args: min_depth (int): minimum depth in directory tree at which to begin searching for files. The default is 0, which means files starting in the top-level directory `input` will be considered. max_depth (int): max depth in directory tree at which files will no longer be considered. The default is None, which means recurse through all subdirectories. pattern (str or Pattern): regex to use for file name inclusion; all those files *not* matching this pattern will be ignored. exclude_pattern (str or Pattern): regex to use for file name exclusion; all files matching this pattern will be ignored. lines_are_documents (bool): if True, each line of each file is considered to be a document. If False (default), each file is considered to be a document. kwargs: keyword arguments passed through to the `TextCorpus` constructor. This is in addition to the non-kwargs `input`, `dictionary`, and `metadata`. See `TextCorpus.__init__` docstring for more details on these.'
def __init__(self, input, dictionary=None, metadata=False, min_depth=0, max_depth=None, pattern=None, exclude_pattern=None, lines_are_documents=False, **kwargs):
self._min_depth = min_depth
self._max_depth = sys.maxsize if max_depth is None else max_depth
self.pattern = pattern
self.exclude_pattern = exclude_pattern
self.lines_are_documents = lines_are_documents
super(TextDirectoryCorpus, self).__init__(input, dictionary, metadata, **kwargs)
'Lazily yield paths to each file in the directory structure within the specified range of depths. If a filename pattern to match was given, further filter to only those filenames that match.'
def iter_filepaths(self):
for depth, dirpath, dirnames, filenames in walk(self.input):
    if self.min_depth <= depth <= self.max_depth:
        if self.pattern is not None:
            filenames = (n for n in filenames if self.pattern.match(n) is not None)
        if self.exclude_pattern is not None:
            filenames = (n for n in filenames if self.exclude_pattern.match(n) is None)

        for name in filenames:
            yield os.path.join(dirpath, name)
'Yield documents from the underlying plain text collection (of one or more files). Each item yielded from this method will be considered a document by subsequent preprocessing methods. If `lines_are_documents` was set to True, items will be lines from files. Otherwise there will be one item per file, containing the entire contents of the file.'
def getstream(self):
num_texts = 0
for path in self.iter_filepaths():
    with open(path, 'rt') as f:
        if self.lines_are_documents:
            for line in f:
                yield line.strip()
                num_texts += 1
        else:
            yield f.read().strip()
            num_texts += 1

self.length = num_texts
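A hedged usage sketch (the directory path and filename pattern are illustrative assumptions; `TextDirectoryCorpus` is the class these methods belong to):

import re

corpus = TextDirectoryCorpus('/path/to/docs', pattern=re.compile(r'.*\.txt$'), lines_are_documents=False)
for tokens in corpus.get_texts():
    pass  # each item is one preprocessed document (here: one file)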
'Initialize the corpus. Unless a dictionary is provided, this scans the corpus once, to determine its vocabulary. If the `pattern` package is installed, use its fancier shallow parsing to get token lemmas; otherwise, use simple regexp tokenization. You can override this automatic logic by setting the `lemmatize` parameter explicitly. If `self.metadata` is set to True, `serialize` will also write out article titles to a pickle file.'
def __init__(self, fname, processes=None, lemmatize=utils.has_pattern(), dictionary=None, filter_namespaces=('0',)):
self.fname = fname
self.filter_namespaces = filter_namespaces
self.metadata = False
if processes is None:
    processes = max(1, multiprocessing.cpu_count() - 1)
self.processes = processes
self.lemmatize = lemmatize
if dictionary is None:
    self.dictionary = Dictionary(self.get_texts())
else:
    self.dictionary = dictionary
'Iterate over the dump, returning text version of each article as a list of tokens. Only articles of sufficient length are returned (short articles & redirects etc are ignored). Note that this iterates over the **texts**; if you want vectors, just use the standard corpus interface instead of this function:: >>> for vec in wiki_corpus: >>> print(vec)'
def get_texts(self):
articles, articles_all = 0, 0
positions, positions_all = 0, 0
texts = ((text, self.lemmatize, title, pageid)
         for title, text, pageid in extract_pages(bz2.BZ2File(self.fname), self.filter_namespaces))
pool = multiprocessing.Pool(self.processes, init_to_ignore_interrupt)

try:
    # process the dump in chunks so the pool doesn't load everything into RAM at once
    for group in utils.chunkize(texts, chunksize=10 * self.processes, maxsize=1):
        for tokens, title, pageid in pool.imap(process_article, group):
            articles_all += 1
            positions_all += len(tokens)
            # prune short articles, redirects and ignored namespaces
            if len(tokens) < ARTICLE_MIN_WORDS or any(title.startswith(ignore + ':') for ignore in IGNORED_NAMESPACES):
                continue
            articles += 1
            positions += len(tokens)
            if self.metadata:
                yield (tokens, (pageid, title))
            else:
                yield tokens
except KeyboardInterrupt:
    logger.warn(
        'user terminated iteration over Wikipedia corpus after %i documents with %i positions '
        '(total %i articles, %i positions before pruning articles shorter than %i words)',
        articles, positions, articles_all, positions_all, ARTICLE_MIN_WORDS)
else:
    logger.info(
        'finished iterating over Wikipedia corpus of %i documents with %i positions '
        '(total %i articles, %i positions before pruning articles shorter than %i words)',
        articles, positions, articles_all, positions_all, ARTICLE_MIN_WORDS)
    self.length = articles  # cache corpus length
finally:
    pool.terminate()
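A hedged usage sketch, assuming this is gensim's WikiCorpus class (the dump filename is illustrative; passing a pre-built or empty dictionary skips the expensive vocabulary scan performed by `__init__`):

import itertools

wiki = WikiCorpus('enwiki-latest-pages-articles.xml.bz2', dictionary={})
for tokens in itertools.islice(wiki.get_texts(), 3):
    print(tokens[:10])  # first 10 tokens of each of the first 3 kept articles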
'Initialize the corpus from a file. `fname_vocab` is the file with vocabulary; if not specified, it defaults to `fname.vocab`.'
def __init__(self, fname, fname_vocab=None):
IndexedCorpus.__init__(self, fname)
logger.info('loading corpus from %s' % fname)

if fname_vocab is None:
    fname_base, _ = path.splitext(fname)
    fname_dir = path.dirname(fname)
    for fname_vocab in [
        utils.smart_extension(fname, '.vocab'),
        utils.smart_extension(fname, '/vocab.txt'),
        utils.smart_extension(fname_base, '.vocab'),
        utils.smart_extension(fname_dir, '/vocab.txt'),
    ]:
        if path.exists(fname_vocab):
            break
    else:
        raise IOError('BleiCorpus: could not find vocabulary file')

self.fname = fname
with utils.smart_open(fname_vocab) as fin:
    words = [utils.to_unicode(word).rstrip() for word in fin]
self.id2word = dict(enumerate(words))
'Iterate over the corpus, returning one sparse vector at a time.'
def __iter__(self):
lineno = -1
with utils.smart_open(self.fname) as fin:
    for lineno, line in enumerate(fin):
        yield self.line2doc(line)
self.length = lineno + 1
'Save a corpus in the LDA-C format. There are actually two files saved: `fname` and `fname.vocab`, where `fname.vocab` is the vocabulary file. This function is automatically called by `BleiCorpus.serialize`; don\'t call it directly, call `serialize` instead.'
@staticmethod
def save_corpus(fname, corpus, id2word=None, metadata=False):
if id2word is None:
    logger.info('no word id mapping provided; initializing from corpus')
    id2word = utils.dict_from_corpus(corpus)
    num_terms = len(id2word)
else:
    num_terms = 1 + max([-1] + id2word.keys())

logger.info("storing corpus in Blei's LDA-C format into %s" % fname)
with utils.smart_open(fname, 'wb') as fout:
    offsets = []
    for doc in corpus:
        doc = list(doc)
        offsets.append(fout.tell())
        parts = ['%i:%g' % p for p in doc if abs(p[1]) > 1e-07]
        fout.write(utils.to_utf8('%i %s\n' % (len(doc), ' '.join(parts))))

# write out the vocabulary file next to the corpus file
fname_vocab = utils.smart_extension(fname, '.vocab')
logger.info('saving vocabulary of %i words to %s' % (num_terms, fname_vocab))
with utils.smart_open(fname_vocab, 'wb') as fout:
    for featureid in xrange(num_terms):
        fout.write(utils.to_utf8('%s\n' % id2word.get(featureid, '---')))

return offsets
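To make the on-disk format concrete, a tiny sketch (plain Python) of how one bag-of-words document becomes an LDA-C line, word count first, then id:count pairs:

doc = [(0, 1.0), (3, 2.0)]
parts = ['%i:%g' % p for p in doc if abs(p[1]) > 1e-07]
print('%i %s' % (len(doc), ' '.join(parts)))  # -> '2 0:1 3:2'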
'Return the document stored at file position `offset`.'
def docbyoffset(self, offset):
with utils.smart_open(self.fname) as f:
    f.seek(offset)
    return self.line2doc(f.readline())
'Iterate over the corpus at the given filename. Yields a bag-of-words, a.k.a list of tuples of (word id, word count), based on the given id2word dictionary.'
def __iter__(self):
with utils.smart_open(self.fname) as f:
    for line in f:
        yield self.line2doc(line)
'Save a corpus in the Mallet format. The document id will be generated by enumerating the corpus. That is, it will range between 0 and number of documents in the corpus. Since Mallet has a language field in the format, this defaults to the string \'__unknown__\'. If the language needs to be saved, post-processing will be required. This function is automatically called by `MalletCorpus.serialize`; don\'t call it directly, call `serialize` instead.'
@staticmethod
def save_corpus(fname, corpus, id2word=None, metadata=False):
if id2word is None:
    logger.info('no word id mapping provided; initializing from corpus')
    id2word = utils.dict_from_corpus(corpus)

logger.info('storing corpus in Mallet format into %s' % fname)

truncated = 0
offsets = []
with utils.smart_open(fname, 'wb') as fout:
    for doc_id, doc in enumerate(corpus):
        if metadata:
            doc_id, doc_lang = doc[1]
            doc = doc[0]
        else:
            doc_lang = '__unknown__'

        words = []
        for wordid, value in doc:
            if abs(int(value) - value) > 1e-06:
                truncated += 1
            words.extend([utils.to_unicode(id2word[wordid])] * int(value))
        offsets.append(fout.tell())
        fout.write(utils.to_utf8('%s %s %s\n' % (doc_id, doc_lang, ' '.join(words))))

if truncated:
    logger.warning(
        'Mallet format can only save vectors with integer elements; '
        '%i float entries were truncated to integer value' % truncated)

return offsets
'Return the document stored at file position `offset`.'
def docbyoffset(self, offset):
with utils.smart_open(self.fname) as f:
    f.seek(offset)
    return self.line2doc(f.readline())
'Initialize the corpus from a file. Although vector labels (~SVM target class) are not used in gensim in any way, they are parsed and stored in `self.labels` for convenience. Set `store_labels=False` to skip storing these labels (e.g. if there are too many vectors to store the self.labels array in memory).'
def __init__(self, fname, store_labels=True):
IndexedCorpus.__init__(self, fname)
logger.info('loading corpus from %s' % fname)

self.fname = fname  # input file, see class doc for format
self.length = None
self.store_labels = store_labels
self.labels = []
'Iterate over the corpus, returning one sparse vector at a time.'
def __iter__(self):
lineno = -1
self.labels = []
with utils.smart_open(self.fname) as fin:
    for lineno, line in enumerate(fin):
        doc = self.line2doc(line)
        if doc is not None:
            if self.store_labels:
                self.labels.append(doc[1])
            yield doc[0]
self.length = lineno + 1
'Save a corpus in the SVMlight format. The SVMlight `<target>` class tag is taken from the `labels` array, or set to 0 for all documents if `labels` is not supplied. This function is automatically called by `SvmLightCorpus.serialize`; don\'t call it directly, call `serialize` instead.'
@staticmethod
def save_corpus(fname, corpus, id2word=None, labels=False, metadata=False):
logger.info('converting corpus to SVMlight format: %s' % fname)

offsets = []
with utils.smart_open(fname, 'wb') as fout:
    for docno, doc in enumerate(corpus):
        label = labels[docno] if labels else 0  # target class is 0 by default
        offsets.append(fout.tell())
        fout.write(utils.to_utf8(SvmLightCorpus.doc2line(doc, label)))
return offsets
'Return the document stored at file position `offset`.'
def docbyoffset(self, offset):
with utils.smart_open(self.fname) as f:
    f.seek(offset)
    return self.line2doc(f.readline())[0]
'Create a document from a single line (string) in SVMlight format'
def line2doc(self, line):
line = utils.to_unicode(line)
line = line[:line.find('#')].strip()
if not line:
    return None  # ignore comments and empty lines
parts = line.split()
if not parts:
    raise ValueError('invalid line format in %s' % self.fname)
target, fields = parts[0], [part.rsplit(':', 1) for part in parts[1:]]
# ignore 'qid' features, convert 1-based feature ids to 0-based
doc = [(int(p1) - 1, float(p2)) for p1, p2 in fields if p1 != 'qid']
return doc, target
'Output the document in SVMlight format, as a string. Inverse function to `line2doc`.'
@staticmethod
def doc2line(doc, label=0):
pairs = ' '.join('%i:%s' % (termid + 1, termval) for termid, termval in doc)  # +1 converts 0-based ids to 1-based
return '%s %s\n' % (label, pairs)
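A small sketch of the resulting SVMlight text format (0-based gensim ids become 1-based feature ids on disk):

doc = [(0, 2.0), (4, 1.0)]
line = SvmLightCorpus.doc2line(doc, label=1)  # '1 1:2.0 5:1.0\n'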
'If given, start training from the iterable `corpus` straight away. If not given, the model is left untrained (presumably because you want to call `update()` manually). `num_topics` is the number of requested latent topics to be extracted from the training corpus. `id2word` is a mapping from word ids (integers) to words (strings). It is used to determine the vocabulary size, as well as for debugging and topic printing. `workers` is the number of extra processes to use for parallelization. Uses all available cores by default: `workers=cpu_count()-1`. **Note**: for hyper-threaded CPUs, `cpu_count()` returns a useless number -- set `workers` directly to the number of your **real** cores (not hyperthreads) minus one, for optimal performance. If `batch` is not set, perform online training by updating the model once every `workers * chunksize` documents (online training). Otherwise, run batch LDA, updating model only once at the end of each full corpus pass. `alpha` and `eta` are hyperparameters that affect sparsity of the document-topic (theta) and topic-word (lambda) distributions. Both default to a symmetric 1.0/num_topics prior. `alpha` can be set to an explicit array = prior of your choice. It also support special values of \'asymmetric\' and \'auto\': the former uses a fixed normalized asymmetric 1.0/topicno prior, the latter learns an asymmetric prior directly from your data. `eta` can be a scalar for a symmetric prior over topic/word distributions, or a matrix of shape num_topics x num_words, which can be used to impose asymmetric priors over the word distribution on a per-topic basis. This may be useful if you want to seed certain topics with particular words by boosting the priors for those words. Calculate and log perplexity estimate from the latest mini-batch once every `eval_every` documents. Set to `None` to disable perplexity estimation (faster), or to `0` to only evaluate perplexity once, at the end of each corpus pass. `decay` and `offset` parameters are the same as Kappa and Tau_0 in Hoffman et al, respectively. `random_state` can be a numpy.random.RandomState object or the seed for one Example: >>> lda = LdaMulticore(corpus, id2word=id2word, num_topics=100) # train model >>> print(lda[doc_bow]) # get topic probability distribution for a document >>> lda.update(corpus2) # update the LDA model with additional documents >>> print(lda[doc_bow])'
def __init__(self, corpus=None, num_topics=100, id2word=None, workers=None, chunksize=2000, passes=1, batch=False, alpha='symmetric', eta=None, decay=0.5, offset=1.0, eval_every=10, iterations=50, gamma_threshold=0.001, random_state=None, minimum_probability=0.01, minimum_phi_value=0.01, per_word_topics=False):
self.workers = max(1, cpu_count() - 1) if workers is None else workers
self.batch = batch

if isinstance(alpha, six.string_types) and alpha == 'auto':
    raise NotImplementedError('auto-tuning alpha not implemented in multicore LDA; use plain LdaModel.')

super(LdaMulticore, self).__init__(
    corpus=corpus, num_topics=num_topics, id2word=id2word, chunksize=chunksize, passes=passes,
    alpha=alpha, eta=eta, decay=decay, offset=offset, eval_every=eval_every, iterations=iterations,
    gamma_threshold=gamma_threshold, random_state=random_state, minimum_probability=minimum_probability,
    minimum_phi_value=minimum_phi_value, per_word_topics=per_word_topics)
'Train the model with new documents, by EM-iterating over `corpus` until the topics converge (or until the maximum number of allowed iterations is reached). `corpus` must be an iterable (repeatable stream of documents). The E-step is distributed over several processes. This update also supports updating an already trained model (`self`) with new documents from `corpus`; the two models are then merged in proportion to the number of old vs. new documents. This feature is still experimental for non-stationary input streams. For stationary input (no topic drift in new documents), on the other hand, this equals the online update of Hoffman et al. and is guaranteed to converge for any `decay` in (0.5, 1.0].'
def update(self, corpus, chunks_as_numpy=False):
try:
    lencorpus = len(corpus)
except:
    logger.warning('input corpus stream has no len(); counting documents')
    lencorpus = sum(1 for _ in corpus)
if lencorpus == 0:
    logger.warning('LdaMulticore.update() called with an empty corpus')
    return

self.state.numdocs += lencorpus

if not self.batch:
    updatetype = 'online'
    updateafter = self.chunksize * self.workers
else:
    updatetype = 'batch'
    updateafter = lencorpus

evalafter = min(lencorpus, (self.eval_every or 0) * updateafter)
updates_per_pass = max(1, lencorpus / updateafter)
logger.info(
    'running %s LDA training, %s topics, %i passes over the supplied corpus of %i documents, '
    'updating every %i documents, evaluating every ~%i documents, '
    'iterating %ix with a convergence threshold of %f',
    updatetype, self.num_topics, self.passes, lencorpus, updateafter,
    evalafter, self.iterations, self.gamma_threshold)

if updates_per_pass * self.passes < 10:
    logger.warning(
        'too few updates, training might not converge; consider increasing the '
        'number of passes or iterations to improve accuracy')

job_queue = Queue(maxsize=2 * self.workers)
result_queue = Queue()

def rho():
    return pow(self.offset + pass_ + (self.num_updates / self.chunksize), -self.decay)

logger.info('training LDA model using %i processes', self.workers)
pool = Pool(self.workers, worker_e_step, (job_queue, result_queue))
for pass_ in xrange(self.passes):
    queue_size, reallen = [0], 0
    other = LdaState(self.eta, self.state.sstats.shape)

    def process_result_queue(force=False):
        """
        Clear the result queue, merging all intermediate results, and update the
        LDA model if necessary.
        """
        merged_new = False
        while not result_queue.empty():
            other.merge(result_queue.get())
            queue_size[0] -= 1
            merged_new = True
        if (force and merged_new and queue_size[0] == 0) or (not self.batch and other.numdocs >= updateafter):
            self.do_mstep(rho(), other, pass_ > 0)
            other.reset()
            if self.eval_every is not None and \
                    ((force and queue_size[0] == 0) or
                     (self.eval_every != 0 and (self.num_updates / updateafter) % self.eval_every == 0)):
                self.log_perplexity(chunk, total_docs=lencorpus)

    chunk_stream = utils.grouper(corpus, self.chunksize, as_numpy=chunks_as_numpy)
    for chunk_no, chunk in enumerate(chunk_stream):
        reallen += len(chunk)  # keep track of how many documents we've processed so far

        # put the chunk into the workers' input job queue
        chunk_put = False
        while not chunk_put:
            try:
                job_queue.put((chunk_no, chunk, self), block=False, timeout=0.1)
                chunk_put = True
                queue_size[0] += 1
                logger.info(
                    'PROGRESS: pass %i, dispatched chunk #%i = documents up to #%i/%i, outstanding queue size %i',
                    pass_, chunk_no, chunk_no * self.chunksize + len(chunk), lencorpus, queue_size[0])
            except queue.Full:
                # the input job queue is full; keep clearing the result queue to avoid a deadlock
                process_result_queue()

        process_result_queue()
    # end of a single corpus pass

    # wait for all outstanding jobs to finish
    while queue_size[0] > 0:
        process_result_queue(force=True)

    if reallen != lencorpus:
        raise RuntimeError("input corpus size changed during training (don't use generators as input)")

pool.terminate()
'Args: model : Pre-trained topic model. Should be provided if topics is not provided. Currently supports LdaModel, LdaMallet wrapper and LdaVowpalWabbit wrapper. Use \'topics\' parameter to plug in an as yet unsupported model. topics : List of tokenized topics. If this is preferred over model, dictionary should be provided. eg:: topics = [[\'human\', \'machine\', \'computer\', \'interface\'], [\'graph\', \'trees\', \'binary\', \'widths\']] texts : Tokenized texts. Needed for coherence models that use sliding window based probability estimator, eg:: texts = [[\'system\', \'human\', \'system\', \'eps\'], [\'user\', \'response\', \'time\'], [\'trees\'], [\'graph\', \'trees\'], [\'graph\', \'minors\', \'trees\'], [\'graph\', \'minors\', \'survey\']] corpus : Gensim document corpus. dictionary : Gensim dictionary mapping of id word to create corpus. If model.id2word is present, this is not needed. If both are provided, dictionary will be used. window_size : Is the size of the window to be used for coherence measures using boolean sliding window as their probability estimator. For \'u_mass\' this doesn\'t matter. If left \'None\' the default window sizes are used which are: \'c_v\' : 110 \'c_uci\' : 10 \'c_npmi\' : 10 coherence : Coherence measure to be used. Supported values are: \'u_mass\' \'c_v\' \'c_uci\' also popularly known as c_pmi \'c_npmi\' For \'u_mass\' corpus should be provided. If texts is provided, it will be converted to corpus using the dictionary. For \'c_v\', \'c_uci\' and \'c_npmi\' texts should be provided. Corpus is not needed. topn : Integer corresponding to the number of top words to be extracted from each topic. processes : number of processes to use for probability estimation phase; any value less than 1 will be interpreted to mean num_cpus - 1; default is -1.'
def __init__(self, model=None, topics=None, texts=None, corpus=None, dictionary=None, window_size=None, coherence='c_v', topn=10, processes=(-1)):
if model is None and topics is None:
    raise ValueError('One of model or topics has to be provided.')
elif topics is not None and dictionary is None:
    raise ValueError('dictionary has to be provided if topics are to be used.')

if texts is None and corpus is None:
    raise ValueError('One of texts or corpus has to be provided.')

# check if an associated dictionary is provided
if dictionary is None:
    if isinstance(model.id2word, FakeDict):
        raise ValueError(
            "The associated dictionary should be provided with the corpus or 'id2word'"
            " for topic model should be set as the associated dictionary.")
    else:
        self.dictionary = model.id2word
else:
    self.dictionary = dictionary

# check for correct inputs for the chosen coherence measure
self.coherence = coherence
if coherence in boolean_document_based:
    if is_corpus(corpus)[0]:
        self.corpus = corpus
    elif texts is not None:
        self.texts = texts
        self.corpus = [self.dictionary.doc2bow(text) for text in self.texts]
    else:
        raise ValueError(
            "Either 'corpus' with 'dictionary' or 'texts' should be provided for %s coherence.", coherence)
elif coherence in sliding_window_based:
    self.window_size = window_size
    if self.window_size is None:
        self.window_size = SLIDING_WINDOW_SIZES[self.coherence]
    if texts is None:
        raise ValueError("'texts' should be provided for %s coherence.", coherence)
    else:
        self.texts = texts
else:
    raise ValueError('%s coherence is not currently supported.', coherence)

self.topn = topn
self._model = model
self._accumulator = None
self._topics = None
self.topics = topics

self.processes = processes if processes > 1 else max(1, mp.cpu_count() - 1)
'Internal helper function to return topics from a trained topic model.'
def _get_topics(self):
topics = []
if isinstance(self.model, LdaModel):
    for topic in self.model.state.get_lambda():
        bestn = argsort(topic, topn=self.topn, reverse=True)
        topics.append(bestn)
elif isinstance(self.model, LdaVowpalWabbit):
    for topic in self.model._get_topics():
        bestn = argsort(topic, topn=self.topn, reverse=True)
        topics.append(bestn)
elif isinstance(self.model, LdaMallet):
    for topic in self.model.word_topics:
        bestn = argsort(topic, topn=self.topn, reverse=True)
        topics.append(bestn)
else:
    raise ValueError(
        'This topic model is not currently supported. Supported topic models are '
        'LdaModel, LdaVowpalWabbit and LdaMallet.')
return topics
'Accumulate word occurrences and co-occurrences from texts or corpus using the optimal method for the chosen coherence metric. This operation may take quite some time for the sliding window based coherence methods.'
def estimate_probabilities(self, segmented_topics=None):
if segmented_topics is None:
    segmented_topics = self.segment_topics()

if self.coherence in boolean_document_based:
    self._accumulator = self.measure.prob(self.corpus, segmented_topics)
else:
    self._accumulator = self.measure.prob(
        texts=self.texts, segmented_topics=segmented_topics,
        dictionary=self.dictionary, window_size=self.window_size,
        processes=self.processes)

return self._accumulator
'Return list of coherence values for each topic based on pipeline parameters.'
def get_coherence_per_topic(self, segmented_topics=None):
measure = self.measure
if segmented_topics is None:
    segmented_topics = measure.seg(self.topics)
if self._accumulator is None:
    self.estimate_probabilities(segmented_topics)

if self.coherence in boolean_document_based:
    kwargs = {}
elif self.coherence == 'c_v':
    kwargs = dict(topics=self.topics, measure='nlr', gamma=1)
else:
    kwargs = dict(normalize=(self.coherence == 'c_npmi'))

return measure.conf(segmented_topics, self._accumulator, **kwargs)
'Aggregate the individual topic coherence measures using the pipeline\'s aggregation function.'
def aggregate_measures(self, topic_coherences):
return self.measure.aggr(topic_coherences)
'Return coherence value based on pipeline parameters.'
def get_coherence(self):
confirmed_measures = self.get_coherence_per_topic()
return self.aggregate_measures(confirmed_measures)
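A hedged usage sketch with toy data, assuming this is gensim's CoherenceModel and using gensim's Dictionary:

texts = [['human', 'interface', 'computer'], ['graph', 'trees'], ['graph', 'minors', 'trees']]
dictionary = Dictionary(texts)
topics = [['human', 'computer', 'interface'], ['graph', 'trees', 'minors']]
cm = CoherenceModel(topics=topics, texts=texts, dictionary=dictionary, coherence='c_v')
print(cm.get_coherence())            # single aggregated score
print(cm.get_coherence_per_topic())  # one value per topic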
'Return a single topic as a formatted string. See `show_topic()` for parameters. >>> lsimodel.print_topic(10, topn=5) \'-0.340 * "category" + 0.298 * "$M$" + 0.183 * "algebra" + -0.174 * "functor" + -0.168 * "operator"\''
def print_topic(self, topicno, topn=10):
return ' + '.join([('%.3f*"%s"' % (v, k)) for (k, v) in self.show_topic(topicno, topn)])
'Alias for `show_topics()` that prints the `num_words` most probable words for `topics` number of topics to log. Set `topics=-1` to print all topics.'
def print_topics(self, num_topics=20, num_words=10):
return self.show_topics(num_topics=num_topics, num_words=num_words, log=True)
'`vw_path` is the path to Vowpal Wabbit\'s \'vw\' executable. `corpus` is an iterable training corpus. If given, training will start immediately, otherwise the model is left untrained (presumably because you want to call `update()` manually). `num_topics` is the number of requested latent topics to be extracted from the training corpus. Corresponds to VW\'s \'--lda <num_topics>\' argument. `id2word` is a mapping from word ids (integers) to words (strings). It is used to determine the vocabulary size, as well as for debugging and topic printing. `chunksize` is the number of documents examined in each batch. Corresponds to VW\'s \'--minibatch <batch_size>\' argument. `passes` is the number of passes over the dataset to use. Corresponds to VW\'s \'--passes <passes>\' argument. `alpha` is a float affecting the sparsity of per-document topic weights. This is applied symmetrically, and should be set higher when documents are thought to look more similar. Corresponds to VW\'s \'--lda_alpha <alpha>\' argument. `eta` is a float which affects the sparsity of topic distributions. This is applied symmetrically, and should be set higher when topics are thought to look more similar. Corresponds to VW\'s \'--lda_rho <rho>\' argument. `decay` learning rate decay, affects how quickly learnt values are forgotten. Should be set to a value between 0.5 and 1.0 to guarantee convergence. Corresponds to VW\'s \'--power_t <tau>\' argument. `offset` integer learning offset, set to higher values to slow down learning on early iterations of the algorithm. Corresponds to VW\'s \'--initial_t <tau>\' argument. `gamma_threshold` affects when the learning loop will be broken out of; higher values will result in earlier loop completion. Corresponds to VW\'s \'--epsilon <eps>\' argument. `random_seed` sets Vowpal Wabbit\'s random seed when learning. Corresponds to VW\'s \'--random_seed <seed>\' argument. `cleanup_files` whether or not to delete the temporary directory and files used by this wrapper. Setting to False can be useful for debugging, or for re-using Vowpal Wabbit files elsewhere. `tmp_prefix` used to prefix the temporary working directory name.'
def __init__(self, vw_path, corpus=None, num_topics=100, id2word=None, chunksize=256, passes=1, alpha=0.1, eta=0.1, decay=0.5, offset=1, gamma_threshold=0.001, random_seed=None, cleanup_files=True, tmp_prefix=u'tmp'):
self.vw_path = vw_path
self.id2word = id2word

if self.id2word is None:
    if corpus is None:
        raise ValueError(u'at least one of corpus/id2word must be specified, to establish input space dimensionality')
    LOG.warning(u'no word id mapping provided; initializing from corpus, assuming identity')
    self.id2word = utils.dict_from_corpus(corpus)
    self.num_terms = len(self.id2word)
elif len(self.id2word) > 0:
    self.num_terms = 1 + max(self.id2word.keys())
else:
    self.num_terms = 0

if self.num_terms == 0:
    raise ValueError(u'cannot compute LDA over an empty collection (no terms)')

# LDA hyperparameters
self.num_topics = num_topics
self.chunksize = chunksize
self.passes = passes
self.alpha = alpha
self.eta = eta
self.gamma_threshold = gamma_threshold
self.offset = offset
self.decay = decay
self.random_seed = random_seed
self._initial_offset = offset

# temporary working directory used for Vowpal Wabbit input/output files
self.tmp_dir = None
self.tmp_prefix = tmp_prefix
self.cleanup_files = cleanup_files
self._init_temp_dir(tmp_prefix)

# used when saving/loading this model's state
self._model_data = None
self._topics_data = None

# cached topics matrix, loaded lazily
self._topics = None

if corpus is not None:
    self.train(corpus)
'Clear any existing model state, and train on given corpus.'
def train(self, corpus):
LOG.debug(u'Training new model from corpus')

# reset the learning offset and any cached topics before training from scratch
self.offset = self._initial_offset
self._topics = None

corpus_size = write_corpus_as_vw(corpus, self._corpus_filename)
cmd = self._get_vw_train_command(corpus_size)
_run_vw_command(cmd)

self.offset += corpus_size
'Update existing model (if any) on corpus.'
def update(self, corpus):
if not os.path.exists(self._model_filename):
    return self.train(corpus)

LOG.debug(u'Updating existing model from corpus')
self._topics = None

corpus_size = write_corpus_as_vw(corpus, self._corpus_filename)
cmd = self._get_vw_update_command(corpus_size)
_run_vw_command(cmd)

self.offset += corpus_size
'Return per-word lower bound on log perplexity. Also logs this and perplexity at INFO level.'
def log_perplexity(self, chunk):
vw_data = self._predict(chunk)[1]
corpus_words = sum(cnt for document in chunk for _, cnt in document)
bound = -vw_data[u'average_loss']
LOG.info(
    u'%.3f per-word bound, %.1f perplexity estimate based on a held-out corpus of %i documents with %i words',
    bound, numpy.exp2(-bound), vw_data[u'corpus_size'], corpus_words)
return bound
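For example, if Vowpal Wabbit reports an average_loss of 7.0, the per-word bound is -7.0 and the logged perplexity estimate is 2^7 = 128.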
'Serialise this model to file with given name.'
def save(self, fname, *args, **kwargs):
if os.path.exists(self._model_filename):
    # read Vowpal Wabbit's binary model file into memory, so the saved
    # object is self-contained in a single serialised file
    LOG.debug(u"Reading model bytes from '%s'", self._model_filename)
    with utils.smart_open(self._model_filename, u'rb') as fhandle:
        self._model_data = fhandle.read()

if os.path.exists(self._topics_filename):
    LOG.debug(u"Reading topic bytes from '%s'", self._topics_filename)
    with utils.smart_open(self._topics_filename, u'rb') as fhandle:
        self._topics_data = fhandle.read()

if u'ignore' not in kwargs:
    kwargs[u'ignore'] = frozenset([u'_topics', u'tmp_dir'])
super(LdaVowpalWabbit, self).save(fname, *args, **kwargs)
'Load LDA model from file with given name.'
@classmethod
def load(cls, fname, *args, **kwargs):
lda_vw = super(LdaVowpalWabbit, cls).load(fname, *args, **kwargs)
lda_vw._init_temp_dir(prefix=lda_vw.tmp_prefix)

if lda_vw._model_data:
    # write the stored binary model back to disk so Vowpal Wabbit can use it immediately
    LOG.debug(u"Writing model bytes to '%s'", lda_vw._model_filename)
    with utils.smart_open(lda_vw._model_filename, u'wb') as fhandle:
        fhandle.write(lda_vw._model_data)
    lda_vw._model_data = None  # no need to keep in memory after this

if lda_vw._topics_data:
    LOG.debug(u"Writing topic bytes to '%s'", lda_vw._topics_filename)
    with utils.smart_open(lda_vw._topics_filename, u'wb') as fhandle:
        fhandle.write(lda_vw._topics_data)
    lda_vw._topics_data = None

return lda_vw
'Cleanup the temporary directory used by this wrapper.'
def __del__(self):
if self.cleanup_files and self.tmp_dir:
    LOG.debug(u'Recursively deleting: %s', self.tmp_dir)
    shutil.rmtree(self.tmp_dir)
'Create a working temporary directory with given prefix.'
def _init_temp_dir(self, prefix=u'tmp'):
self.tmp_dir = tempfile.mkdtemp(prefix=prefix)
LOG.info(u'using %s as temp dir', self.tmp_dir)
'Get list of command line arguments for running prediction.'
def _get_vw_predict_command(self, corpus_size):
cmd = [
    self.vw_path,
    u'--testonly',  # don't update the model with this data
    u'--lda_D', str(corpus_size),
    u'-i', self._model_filename,
    u'-d', self._corpus_filename,
    u'--learning_rate', u'0',
    u'-p', self._predict_filename,
]

if self.random_seed is not None:
    cmd.extend([u'--random_seed', str(self.random_seed)])

return cmd
'Get list of command line arguments for running model training. If \'update\' is set to True, this specifies that we\'re further training an existing model.'
def _get_vw_train_command(self, corpus_size, update=False):
cmd = [
    self.vw_path,
    u'-d', self._corpus_filename,
    u'--power_t', str(self.decay),
    u'--initial_t', str(self.offset),
    u'--minibatch', str(self.chunksize),
    u'--lda_D', str(corpus_size),
    u'--passes', str(self.passes),
    u'--cache_file', self._cache_filename,
    u'--lda_epsilon', str(self.gamma_threshold),
    u'--readable_model', self._topics_filename,
    u'-k',  # clear the cache
    u'-f', self._model_filename,
]

if update:
    cmd.extend([u'-i', self._model_filename])
else:
    # these parameters are read from the model file when updating
    cmd.extend([
        u'--lda', str(self.num_topics),
        u'-b', str(_bit_length(self.num_terms)),
        u'--lda_alpha', str(self.alpha),
        u'--lda_rho', str(self.eta),
    ])

if self.random_seed is not None:
    cmd.extend([u'--random_seed', str(self.random_seed)])

return cmd
'Get list of command line arguments to update a model.'
def _get_vw_update_command(self, corpus_size):
return self._get_vw_train_command(corpus_size, update=True)
'Read topics file generated by Vowpal Wabbit, convert to numpy array. Output consists of many header lines, followed by a number of lines of: <word_id> <topic_1_gamma> <topic_2_gamma> ...'
def _load_vw_topics(self):
topics = numpy.zeros((self.num_topics, self.num_terms), dtype=numpy.float32)

with utils.smart_open(self._topics_filename) as topics_file:
    found_data = False

    for line in topics_file:
        # skip header lines until the start of the data section
        if not found_data:
            if line.startswith('0 ') and ':' not in line:
                found_data = True
            else:
                continue

        fields = line.split()
        word_id = int(fields[0])

        # ignore any entries beyond the actual vocabulary size
        if word_id >= self.num_terms:
            break

        topics[:, word_id] = fields[1:]

# normalise each topic to a probability distribution
self._topics = topics / topics.sum(axis=1, keepdims=True)
'Get topics matrix, load from file if necessary.'
def _get_topics(self):
if self._topics is None:
    self._load_vw_topics()
return self._topics
'Run given chunk of documents against currently trained model. Returns a tuple of prediction matrix and Vowpal Wabbit data.'
def _predict(self, chunk):
corpus_size = write_corpus_as_vw(chunk, self._corpus_filename)

cmd = self._get_vw_predict_command(corpus_size)
vw_data = _parse_vw_output(_run_vw_command(cmd))
vw_data[u'corpus_size'] = corpus_size

predictions = numpy.zeros((corpus_size, self.num_topics), dtype=numpy.float32)

with utils.smart_open(self._predict_filename) as fhandle:
    for i, line in enumerate(fhandle):
        predictions[i, :] = line.split()

# normalise each document's topic weights to a probability distribution
predictions = predictions / predictions.sum(axis=1, keepdims=True)

return predictions, vw_data
'Get path to given filename in temp directory.'
def _get_filename(self, name):
return os.path.join(self.tmp_dir, name)
'Get path to file to write Vowpal Wabbit model to.'
@property
def _model_filename(self):
return self._get_filename(u'model.vw')
'Get path to file to write Vowpal Wabbit cache to.'
@property
def _cache_filename(self):
return self._get_filename(u'cache.vw')
'Get path to file to write Vowpal Wabbit corpus to.'
@property
def _corpus_filename(self):
return self._get_filename(u'corpus.vw')
'Get path to file to write Vowpal Wabbit topics to.'
@property
def _topics_filename(self):
return self._get_filename(u'topics.vw')
'Get path to file to write Vowpal Wabbit predictions to.'
@property
def _predict_filename(self):
return self._get_filename(u'predict.vw')
'The word and context embedding files are generated by the wordrank binary and are saved in the "out_name" directory, which is created inside the wordrank directory. The vocab and cooccurrence files are generated using the glove code available inside the wordrank directory. These files are used by the wordrank binary for training. `wr_path` is the absolute path to the Wordrank directory. `corpus_file` is the filename of the text file to be used for training the Wordrank model. Expects the file to contain space-separated tokens in a single line. `out_name` is the name of the directory which will be created (in the wordrank folder) to save embeddings and training data. It will contain the following contents: word embeddings saved after every dump_period, stored in a file model_word_<current iter>.txt; context embeddings saved after every dump_period, stored in a file model_context_<current iter>.txt; a meta directory which contains: \'vocab.txt\' - vocab words, \'wiki.toy\' - word-word cooccurrence values, \'meta\' - vocab and cooccurrence lengths. `size` is the dimensionality of the feature vectors. `window` is the number of context words to the left (and to the right, if symmetric = 1). `symmetric` if 0, only use left context words, else use left and right both. `min_count` = ignore all words with total frequency lower than this. `max_vocab_size` upper bound on vocabulary size, i.e. keep the <int> most frequent words. Default is 0 for no limit. `sgd_num` number of SGD steps taken for each data point. `lrate` is the learning rate (too high diverges, gives NaN). `period` is the period of xi variable updates. `iter` = number of iterations (epochs) over the corpus. `epsilon` is the power scaling value for the weighting function. `dump_period` is the period after which embeddings should be dumped. `reg` is the value of the regularization parameter. `alpha` is the alpha parameter of the gamma distribution. `beta` is the beta parameter of the gamma distribution. `loss` = name of the loss (logistic, hinge). `memory` = soft limit for memory consumption, in GB. `np` number of copies to execute (mpirun option). `cleanup_files` if True, delete the directory and files used by this wrapper; setting to False can be useful for debugging. `sorted_vocab` = if 1 (default), sort the vocabulary by descending frequency before assigning word indexes. `ensemble` = 0 (default), use ensemble of word and context vectors.'
@classmethod
def train(cls, wr_path, corpus_file, out_name, size=100, window=15, symmetric=1, min_count=5, max_vocab_size=0,
          sgd_num=100, lrate=0.001, period=10, iter=90, epsilon=0.75, dump_period=10, reg=0, alpha=100,
          beta=99, loss='hinge', memory=4.0, np=1, cleanup_files=False, sorted_vocab=1, ensemble=0):
# prepare training data (vocab and cooccurrence files) using the bundled glove code
model_dir = os.path.join(wr_path, out_name)
meta_dir = os.path.join(model_dir, 'meta')
os.makedirs(meta_dir)
logger.info("Dumped data will be stored in '%s'", model_dir)
copyfile(corpus_file, os.path.join(meta_dir, corpus_file.split('/')[-1]))

vocab_file = os.path.join(meta_dir, 'vocab.txt')
temp_vocab_file = os.path.join(meta_dir, 'tempvocab.txt')
cooccurrence_file = os.path.join(meta_dir, 'cooccurrence')
cooccurrence_shuf_file = os.path.join(meta_dir, 'wiki.toy')
meta_file = os.path.join(meta_dir, 'meta')

cmd_vocab_count = [
    os.path.join(wr_path, 'glove', 'vocab_count'),
    '-min-count', str(min_count), '-max-vocab', str(max_vocab_size)]
cmd_cooccurence_count = [
    os.path.join(wr_path, 'glove', 'cooccur'),
    '-memory', str(memory), '-vocab-file', temp_vocab_file,
    '-window-size', str(window), '-symmetric', str(symmetric)]
cmd_shuffle_cooccurences = [os.path.join(wr_path, 'glove', 'shuffle'), '-memory', str(memory)]
cmd_del_vocab_freq = ['cut', '-d', ' ', '-f', '1', temp_vocab_file]

commands = [cmd_vocab_count, cmd_cooccurence_count, cmd_shuffle_cooccurences]
input_fnames = [
    os.path.join(meta_dir, os.path.split(corpus_file)[-1]),
    os.path.join(meta_dir, os.path.split(corpus_file)[-1]),
    cooccurrence_file]
output_fnames = [temp_vocab_file, cooccurrence_file, cooccurrence_shuf_file]

logger.info('Prepare training data (%s) using glove code', ', '.join(input_fnames))
for command, input_fname, output_fname in zip(commands, input_fnames, output_fnames):
    with smart_open(input_fname, 'rb') as r:
        with smart_open(output_fname, 'wb') as w:
            utils.check_output(w, args=command, stdin=r)

logger.info('Deleting frequencies from vocab file')
with smart_open(vocab_file, 'wb') as w:
    utils.check_output(w, args=cmd_del_vocab_freq)

with smart_open(vocab_file, 'rb') as f:
    numwords = sum(1 for line in f)
with smart_open(cooccurrence_shuf_file, 'rb') as f:
    numlines = sum(1 for line in f)
with smart_open(meta_file, 'wb') as f:
    meta_info = '{0} {1}\n{2} {3}\n{4} {5}'.format(
        numwords, numwords, numlines, cooccurrence_shuf_file.split('/')[-1],
        numwords, vocab_file.split('/')[-1])
    f.write(meta_info.encode('utf-8'))

if iter % dump_period == 0:
    iter += 1
else:
    logger.warning(
        'Resultant embedding will be from %d iterations rather than the input %d iterations, '
        'as wordrank dumps the embedding only at dump_period intervals. '
        'Input an appropriate combination of parameters (iter, dump_period) '
        'such that "iter mod dump_period" is zero.',
        iter - (iter % dump_period), iter)

wr_args = {
    'path': meta_dir,
    'nthread': multiprocessing.cpu_count(),
    'sgd_num': sgd_num,
    'lrate': lrate,
    'period': period,
    'iter': iter,
    'epsilon': epsilon,
    'dump_prefix': 'model',
    'dump_period': dump_period,
    'dim': size,
    'reg': reg,
    'alpha': alpha,
    'beta': beta,
    'loss': loss
}

# run the wordrank binary through mpirun with the assembled arguments
cmd = ['mpirun', '-np']
cmd.append(str(np))
cmd.append(os.path.join(wr_path, 'wordrank'))
for option, value in wr_args.items():
    cmd.append('--%s' % option)
    cmd.append(str(value))
logger.info('Running wordrank binary')
output = utils.check_output(args=cmd)

# use the embeddings from the last dumped iteration
max_iter_dump = iter - (iter % dump_period)
os.rename('model_word_%d.txt' % max_iter_dump, os.path.join(model_dir, 'wordrank.words'))
os.rename('model_context_%d.txt' % max_iter_dump, os.path.join(model_dir, 'wordrank.contexts'))
model = cls.load_wordrank_model(
    os.path.join(model_dir, 'wordrank.words'), vocab_file,
    os.path.join(model_dir, 'wordrank.contexts'), sorted_vocab, ensemble)

if cleanup_files:
    rmtree(model_dir)
return model
'Sort embeddings according to word frequency.'
def sort_embeddings(self, vocab_file):
counts = {}
vocab_size = len(self.vocab)
prev_syn0 = copy.deepcopy(self.syn0)
prev_vocab = copy.deepcopy(self.vocab)
self.index2word = []

# the vocab file is already sorted by descending frequency
with utils.smart_open(vocab_file) as fin:
    for index, line in enumerate(fin):
        word, count = utils.to_unicode(line).strip(), vocab_size - index
        counts[word] = int(count)
        self.index2word.append(word)
assert len(self.index2word) == vocab_size, 'mismatch between vocab sizes'

for word_id, word in enumerate(self.index2word):
    self.syn0[word_id] = prev_syn0[prev_vocab[word].index]
    self.vocab[word].index = word_id
    self.vocab[word].count = counts[word]
'Replace syn0 with the sum of context and word embeddings.'
def ensemble_embedding(self, word_embedding, context_embedding):
glove2word2vec(context_embedding, context_embedding + '.w2vformat')
w_emb = KeyedVectors.load_word2vec_format('%s.w2vformat' % word_embedding)
c_emb = KeyedVectors.load_word2vec_format('%s.w2vformat' % context_embedding)
assert set(w_emb.vocab) == set(c_emb.vocab), 'Vocabs are not same for both embeddings'

# reorder the context embedding so words line up with the word embedding
prev_c_emb = copy.deepcopy(c_emb.syn0)
for word_id, word in enumerate(w_emb.index2word):
    c_emb.syn0[word_id] = prev_c_emb[c_emb.vocab[word].index]
# sum the two embeddings
new_emb = w_emb.syn0 + c_emb.syn0
self.syn0 = new_emb
return new_emb
'Accept a single word as input. Returns the word\'s representations in vector space, as a 1D numpy array. The word can be out-of-vocabulary as long as ngrams for the word are present. For words with all ngrams absent, a KeyError is raised. Example:: >>> trained_model[\'office\'] array([ -1.40128313e-02, ...])'
def word_vec(self, word, use_norm=False):
if word in self.vocab:
    return super(FastTextKeyedVectors, self).word_vec(word, use_norm)
else:
    # compose an out-of-vocabulary word from its character ngram vectors
    word_vec = np.zeros(self.syn0_all.shape[1])
    ngrams = FastText.compute_ngrams(word, self.min_n, self.max_n)
    ngrams = [ng for ng in ngrams if ng in self.ngrams]
    if use_norm:
        ngram_weights = self.syn0_all_norm
    else:
        ngram_weights = self.syn0_all
    for ngram in ngrams:
        word_vec += ngram_weights[self.ngrams[ngram]]
    if word_vec.any():
        return word_vec / len(ngrams)
    else:  # no ngrams of the word are present in self.ngrams
        raise KeyError('all ngrams for word %s absent from model' % word)
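For example, with `min_n=3` the out-of-vocabulary word 'night' is padded to '<night>' in fastText's scheme and decomposed into character ngrams such as '<ni', 'nig', 'igh', 'ght' and 'ht>'; the returned vector is the average of the vectors of whichever of those ngrams are present in `self.ngrams`.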
'Precompute L2-normalized vectors. If `replace` is set, forget the original vectors and only keep the normalized ones = saves lots of memory! Note that you **cannot continue training** after doing a replace. The model becomes effectively read-only = you can only call `most_similar`, `similarity` etc.'
def init_sims(self, replace=False):
super(FastTextKeyedVectors, self).init_sims(replace)
if getattr(self, 'syn0_all_norm', None) is None or replace:
    logger.info('precomputing L2-norms of ngram weight vectors')
    if replace:
        for i in xrange(self.syn0_all.shape[0]):
            self.syn0_all[i, :] /= sqrt((self.syn0_all[i, :] ** 2).sum(-1))
        self.syn0_all_norm = self.syn0_all
    else:
        self.syn0_all_norm = (self.syn0_all / sqrt((self.syn0_all ** 2).sum(-1))[..., newaxis]).astype(REAL)
'Check if word is present in the vocabulary, or if any word ngrams are present. A vector for the word is guaranteed to exist if `__contains__` returns True.'
def __contains__(self, word):
if word in self.vocab:
    return True
else:
    word_ngrams = set(FastText.compute_ngrams(word, self.min_n, self.max_n))
    if len(word_ngrams & set(self.ngrams.keys())):
        return True
    else:
        return False
'`ft_path` is the path to the FastText executable, e.g. `/home/kofola/fastText/fasttext`. `corpus_file` is the filename of the text file to be used for training the FastText model. Expects the file to contain utf-8 encoded text. `model` defines the training algorithm. By default, cbow is used. Accepted values are \'cbow\', \'skipgram\'. `size` is the dimensionality of the feature vectors. `window` is the maximum distance between the current and predicted word within a sentence. `alpha` is the initial learning rate. `min_count` = ignore all words with total occurrences lower than this. `word_ngram` = max length of word ngram. `loss` = defines training objective. Allowed values are `hs` (hierarchical softmax), `ns` (negative sampling) and `softmax`. Defaults to `ns`. `sample` = threshold for configuring which higher-frequency words are randomly downsampled; default is 1e-3, useful range is (0, 1e-5). `negative` = the value for negative specifies how many "noise words" should be drawn (usually between 5-20). Default is 5. If set to 0, no negative sampling is used. Only relevant when `loss` is set to `ns`. `iter` = number of iterations (epochs) over the corpus. Default is 5. `min_n` = min length of char ngrams to be used for training word representations. Default is 3. `max_n` = max length of char ngrams to be used for training word representations. Set `max_n` to be less than `min_n` to avoid char ngrams being used. Default is 6. `sorted_vocab` = if 1 (default), sort the vocabulary by descending frequency before assigning word indexes. `threads` = number of threads to use. Default is 12.'
@classmethod
def train(cls, ft_path, corpus_file, output_file=None, model='cbow', size=100, alpha=0.025, window=5,
          min_count=5, word_ngrams=1, loss='ns', sample=0.001, negative=5, iter=5, min_n=3, max_n=6,
          sorted_vocab=1, threads=12):
ft_path = ft_path
output_file = output_file or os.path.join(tempfile.gettempdir(), 'ft_model')
ft_args = {
    'input': corpus_file,
    'output': output_file,
    'lr': alpha,
    'dim': size,
    'ws': window,
    'epoch': iter,
    'minCount': min_count,
    'wordNgrams': word_ngrams,
    'neg': negative,
    'loss': loss,
    'minn': min_n,
    'maxn': max_n,
    'thread': threads,
    't': sample
}
cmd = [ft_path, model]
for option, value in ft_args.items():
    cmd.append('-%s' % option)
    cmd.append(str(value))

output = utils.check_output(args=cmd)
model = cls.load_fasttext_format(output_file)
cls.delete_training_files(output_file)
return model
'Load the input-hidden weight matrix from the fast text output files. Note that due to limitations in the FastText API, you cannot continue training with a model loaded this way, though you can query for word similarity etc. `model_file` is the path to the FastText output files. FastText outputs two model files - `/path/to/model.vec` and `/path/to/model.bin` Expected value for this example: `/path/to/model` or `/path/to/model.bin`, as gensim requires only `.bin` file to load entire fastText model.'
@classmethod
def load_fasttext_format(cls, model_file, encoding='utf8'):
model = cls()
if not model_file.endswith('.bin'):
    model_file += '.bin'
model.file_name = model_file
model.load_binary_data(encoding=encoding)
return model
'Deletes the files created by FastText training'
@classmethod
def delete_training_files(cls, model_file):
try:
    os.remove('%s.vec' % model_file)
    os.remove('%s.bin' % model_file)
except FileNotFoundError:
    logger.debug('Training files %s not found when attempting to delete', model_file)
    pass
'Loads data from the output binary file created by FastText training'
def load_binary_data(self, encoding='utf8'):
with utils.smart_open(self.file_name, 'rb') as f:
    self.load_model_params(f)
    self.load_dict(f, encoding=encoding)
    self.load_vectors(f)
'Computes ngrams of all words present in vocabulary and stores vectors for only those ngrams. Vectors for other ngrams are initialized with a random uniform distribution in FastText. These vectors are discarded here to save space.'
def init_ngrams(self):
self.wv.ngrams = {}
all_ngrams = []
self.wv.syn0 = np.zeros((len(self.wv.vocab), self.vector_size), dtype=REAL)

for w, vocab in self.wv.vocab.items():
    all_ngrams += self.compute_ngrams(w, self.wv.min_n, self.wv.max_n)
    self.wv.syn0[vocab.index] += np.array(self.wv.syn0_all[vocab.index])

all_ngrams = set(all_ngrams)
self.num_ngram_vectors = len(all_ngrams)
ngram_indices = []
for i, ngram in enumerate(all_ngrams):
    ngram_hash = self.ft_hash(ngram)
    ngram_indices.append(len(self.wv.vocab) + ngram_hash % self.bucket)
    self.wv.ngrams[ngram] = i
self.wv.syn0_all = self.wv.syn0_all.take(ngram_indices, axis=0)

ngram_weights = self.wv.syn0_all

logger.info(
    'loading weights for %s words for fastText model from %s',
    len(self.wv.vocab), self.file_name)

for w, vocab in self.wv.vocab.items():
    word_ngrams = self.compute_ngrams(w, self.wv.min_n, self.wv.max_n)
    for word_ngram in word_ngrams:
        self.wv.syn0[vocab.index] += np.array(ngram_weights[self.wv.ngrams[word_ngram]])

    self.wv.syn0[vocab.index] /= (len(word_ngrams) + 1)
logger.info(
    'loaded %s weight matrix for fastText model from %s',
    self.wv.syn0.shape, self.file_name)
'Reproduces [hash method](https://github.com/facebookresearch/fastText/blob/master/src/dictionary.cc) used in fastText.'
@staticmethod
def ft_hash(string):
# integer overflow is expected here (FNV-1a), so silence the numpy warnings
old_settings = np.seterr(all='ignore')
h = np.uint32(2166136261)
for c in string:
    h = h ^ np.uint32(ord(c))
    h = h * np.uint32(16777619)
np.seterr(**old_settings)
return h
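A brief usage sketch: the hash is only ever used modulo the bucket count to pick an ngram row (the bucket size below is an illustrative assumption, not taken from this code):

bucket = 2000000
row = FastText.ft_hash('<ni') % bucket  # bucket index of the hashed ngram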
'Load the word vectors into a matrix from the varembed output vector files. Using morphemes requires Python 2.7 or above. \'vectors\' is the pickle file containing the word vectors. \'morfessor_model\' is the path to the trained morfessor model; morpheme embeddings are added to the output only if it is provided.'
@classmethod
def load_varembed_format(cls, vectors, morfessor_model=None):
result = cls()
if vectors is None:
    raise Exception('Please provide vectors binary to load varembed model')
D = utils.unpickle(vectors)
word_to_ix = D['word_to_ix']
morpho_to_ix = D['morpho_to_ix']
word_embeddings = D['word_embeddings']
morpho_embeddings = D['morpheme_embeddings']
result.load_word_embeddings(word_embeddings, word_to_ix)
if morfessor_model:
    if sys.version_info >= (2, 7):  # morfessor is only supported on Python 2.7 and above
        try:
            import morfessor
            morfessor_model = morfessor.MorfessorIO().read_binary_model_file(morfessor_model)
            result.add_morphemes_to_embeddings(morfessor_model, morpho_embeddings, morpho_to_ix)
        except ImportError:
            logger.error('Could not import morfessor. Not using morpheme embeddings')
            raise ImportError('Could not import morfessor.')
    else:
        raise Exception('Using Morphemes requires Python 2.7 and above. Morfessor is not supported in python 2.6')

logger.info('Loaded varembed model vectors from %s', vectors)
return result
'Loads the word embeddings'
def load_word_embeddings(self, word_embeddings, word_to_ix):
logger.info('Loading the vocabulary')
self.vocab = {}
self.index2word = []
counts = {}
for word in word_to_ix:
    counts[word] = counts.get(word, 0) + 1
self.vocab_size = len(counts)
self.vector_size = word_embeddings.shape[1]
self.syn0 = np.zeros((self.vocab_size, self.vector_size))
self.index2word = [None] * self.vocab_size
logger.info('Corpus has %i words', len(self.vocab))
for word_id, word in enumerate(counts):
    self.vocab[word] = Vocab(index=word_id, count=counts[word])
    self.syn0[word_id] = word_embeddings[word_to_ix[word]]
    self.index2word[word_id] = word
assert (len(self.vocab), self.vector_size) == self.syn0.shape
logger.info('Loaded matrix of %d size and %d dimensions', self.vocab_size, self.vector_size)
'Method to include morpheme embeddings into varembed vectors Allowed only in Python versions 2.7 and above.'
def add_morphemes_to_embeddings(self, morfessor_model, morpho_embeddings, morpho_to_ix):
for word in self.vocab:
    morpheme_embedding = np.array(
        [
            morpho_embeddings[morpho_to_ix.get(m, -1)]
            for m in morfessor_model.viterbi_segment(word)[0]
        ]
    ).sum(axis=0)
    self.syn0[self.vocab[word].index] += morpheme_embedding
logger.info('Added morphemes to word vectors')
'`dtm_path` is the path to the dtm executable, e.g. `C:/dtm/dtm-win64.exe`. `corpus` is a gensim corpus, aka a stream of sparse document vectors. `id2word` is a mapping between token ids and tokens. `mode` controls the mode of the model: \'fit\' is for training, \'time\' for analyzing documents through time according to a DTM, basically a held-out set. `model` controls the choice of model: \'fixed\' is for DIM and \'dtm\' for DTM. `lda_sequence_min_iter` min iteration of LDA. `lda_sequence_max_iter` max iteration of LDA. `lda_max_em_iter` max EM optimization iterations in LDA. `alpha` is a hyperparameter that affects sparsity of the document-topics for the LDA models in each timeslice. `top_chain_var` is a hyperparameter that affects how much topics are allowed to change between consecutive time slices (the variance of the topic chain). `rng_seed` is the random seed. `initialize_lda` initialize DTM with LDA.'
def __init__(self, dtm_path, corpus=None, time_slices=None, mode='fit', model='dtm', num_topics=100, id2word=None, prefix=None, lda_sequence_min_iter=6, lda_sequence_max_iter=20, lda_max_em_iter=10, alpha=0.01, top_chain_var=0.005, rng_seed=0, initialize_lda=True):
if (not os.path.isfile(dtm_path)): raise ValueError('dtm_path must point to the binary file, not to a folder') self.dtm_path = dtm_path self.id2word = id2word if (self.id2word is None): logger.warning('no word id mapping provided; initializing from corpus, assuming identity') self.id2word = utils.dict_from_corpus(corpus) self.num_terms = len(self.id2word) else: self.num_terms = (0 if (not self.id2word) else (1 + max(self.id2word.keys()))) if (self.num_terms == 0): raise ValueError('cannot compute DTM over an empty collection (no terms)') self.num_topics = num_topics try: lencorpus = len(corpus) except: logger.warning('input corpus stream has no len(); counting documents') lencorpus = sum((1 for _ in corpus)) if (lencorpus == 0): raise ValueError('cannot compute DTM over an empty corpus') if ((model == 'fixed') and any(((not text) for text in corpus))): raise ValueError("There is a text without words in the input corpus.\n This breaks method='fixed' (The DIM model).") if (lencorpus != sum(time_slices)): raise ValueError('mismatched timeslices %{slices} for corpus of len {clen}'.format(slices=sum(time_slices), clen=lencorpus)) self.lencorpus = lencorpus if (prefix is None): rand_prefix = (hex(random.randint(0, 16777215))[2:] + '_') prefix = os.path.join(tempfile.gettempdir(), rand_prefix) self.prefix = prefix self.time_slices = time_slices self.lda_sequence_min_iter = int(lda_sequence_min_iter) self.lda_sequence_max_iter = int(lda_sequence_max_iter) self.lda_max_em_iter = int(lda_max_em_iter) self.alpha = alpha self.top_chain_var = top_chain_var self.rng_seed = rng_seed self.initialize_lda = str(initialize_lda).lower() self.lambda_ = None self.obs_ = None self.lhood_ = None self.gamma_ = None self.init_alpha = None self.init_beta = None self.init_ss = None self.em_steps = [] self.influences_time = [] if (corpus is not None): self.train(corpus, time_slices, mode, model)
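A hedged construction sketch for this wrapper; the binary path is a placeholder, and note that the `time_slices` counts must sum to the corpus length, as the constructor checks.

from gensim.corpora import Dictionary
from gensim.models.wrappers import DtmModel

texts = [['bank', 'river', 'shore'], ['bank', 'money', 'loan'],
         ['money', 'loan', 'rate'], ['river', 'water', 'flow']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Two time slices of two documents each: 2 + 2 == len(corpus).
model = DtmModel('/path/to/dtm-win64.exe',  # placeholder path to the DTM binary
                 corpus=corpus, time_slices=[2, 2],
                 num_topics=2, id2word=dictionary, initialize_lda=True)
print(model.show_topics(num_topics=2, times=2, num_words=5))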
'Serialize documents in LDA-C format to a temporary text file, and write the time-slice counts to a separate file.'
def convert_input(self, corpus, time_slices):
logger.info(('serializing temporary corpus to %s' % self.fcorpustxt())) corpora.BleiCorpus.save_corpus(self.fcorpustxt(), corpus) with utils.smart_open(self.ftimeslices(), 'wb') as fout: fout.write(utils.to_utf8((str(len(self.time_slices)) + '\n'))) for sl in time_slices: fout.write(utils.to_utf8((str(sl) + '\n')))
'Train DTM model using specified corpus and time slices.'
def train(self, corpus, time_slices, mode, model):
self.convert_input(corpus, time_slices) arguments = '--ntopics={p0} --model={mofrl} --mode={p1} --initialize_lda={p2} --corpus_prefix={p3} --outname={p4} --alpha={p5}'.format(p0=self.num_topics, mofrl=model, p1=mode, p2=self.initialize_lda, p3=self.fcorpus(), p4=self.foutname(), p5=self.alpha) params = '--lda_max_em_iter={p0} --lda_sequence_min_iter={p1} --lda_sequence_max_iter={p2} --top_chain_var={p3} --rng_seed={p4} '.format(p0=self.lda_max_em_iter, p1=self.lda_sequence_min_iter, p2=self.lda_sequence_max_iter, p3=self.top_chain_var, p4=self.rng_seed) arguments = ((arguments + ' ') + params) logger.info(('training DTM with args %s' % arguments)) cmd = ([self.dtm_path] + arguments.split()) logger.info(('Running command %s' % cmd)) check_output(args=cmd, stderr=PIPE) self.em_steps = np.loadtxt(self.fem_steps()) self.init_ss = np.loadtxt(self.flda_ss()) if self.initialize_lda: self.init_alpha = np.loadtxt(self.finit_alpha()) self.init_beta = np.loadtxt(self.finit_beta()) self.lhood_ = np.loadtxt(self.fout_liklihoods()) self.gamma_ = np.loadtxt(self.fout_gamma()) self.gamma_.shape = (self.lencorpus, self.num_topics) self.gamma_ /= self.gamma_.sum(axis=1)[:, np.newaxis] self.lambda_ = np.zeros((self.num_topics, (self.num_terms * len(self.time_slices)))) self.obs_ = np.zeros((self.num_topics, (self.num_terms * len(self.time_slices)))) for t in range(self.num_topics): topic = ('%03d' % t) self.lambda_[t, :] = np.loadtxt(self.fout_prob().format(i=topic)) self.obs_[t, :] = np.loadtxt(self.fout_observations().format(i=topic)) self.lambda_.shape = (self.num_topics, self.num_terms, len(self.time_slices)) self.obs_.shape = (self.num_topics, self.num_terms, len(self.time_slices)) if (model == 'fixed'): for (k, t) in enumerate(self.time_slices): stamp = ('%03d' % k) influence = np.loadtxt(self.fout_influence().format(i=stamp)) influence.shape = (t, self.num_topics) self.influences_time.append(influence)
'Return the `num_words` most probable words for `num_topics` topics at `times` time slices. Set `num_topics=-1` to include all topics. Set `formatted=True` to return the topics as a list of strings, or `False` to return them as lists of (weight, word) pairs.'
def show_topics(self, num_topics=10, times=5, num_words=10, log=False, formatted=True):
if ((num_topics < 0) or (num_topics >= self.num_topics)): num_topics = self.num_topics chosen_topics = range(num_topics) else: num_topics = min(num_topics, self.num_topics) chosen_topics = range(num_topics) if ((times < 0) or (times >= len(self.time_slices))): times = len(self.time_slices) chosen_times = range(times) else: times = min(times, len(self.time_slices)) chosen_times = range(times) shown = [] for time in chosen_times: for i in chosen_topics: if formatted: topic = self.print_topic(i, time, num_words=num_words) else: topic = self.show_topic(i, time, num_words=num_words) shown.append(topic) return shown
'Return the `topn` most probable words at time slice `time` for the given `topicid`, as a list of `(word_probability, word)` 2-tuples. The `num_words` parameter is deprecated; use `topn` instead.'
def show_topic(self, topicid, time, topn=50, num_words=None):
if (num_words is not None): logger.warning('The parameter num_words for show_topic() would be deprecated in the updated version.') logger.warning('Please use topn instead.') topn = num_words topics = self.lambda_[:, :, time] topic = topics[topicid] topic = np.exp(topic) topic = (topic / topic.sum()) bestn = matutils.argsort(topic, topn, reverse=True) beststr = [(topic[id], self.id2word[id]) for id in bestn] return beststr
'Return the given topic, formatted as a string.'
def print_topic(self, topicid, time, topn=10, num_words=None):
if (num_words is not None): warnings.warn('The parameter num_words for print_topic() would be deprecated in the updated version. Please use topn instead.') topn = num_words return ' + '.join([('%.3f*%s' % v) for v in self.show_topic(topicid, time, topn)])
'Return doc_topic distributions, topic-term distributions, doc_lengths, term_frequency and vocab in the format expected by pyLDAvis. All of these are needed to visualise the topics of a DTM for a particular time slice via pyLDAvis. The `time` parameter selects the time slice to visualise.'
def dtm_vis(self, corpus, time):
topic_term = (np.exp(self.lambda_[:, :, time]) / np.exp(self.lambda_[:, :, time]).sum()) topic_term = (topic_term * self.num_topics) doc_topic = self.gamma_ doc_lengths = [len(doc) for (doc_no, doc) in enumerate(corpus)] term_frequency = np.zeros(len(self.id2word)) for (doc_no, doc) in enumerate(corpus): for pair in doc: term_frequency[pair[0]] += pair[1] vocab = [self.id2word[i] for i in range(0, len(self.id2word))] return (doc_topic, topic_term, doc_lengths, term_frequency, vocab)
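The returned tuple maps directly onto pyLDAvis; a sketch assuming pyLDAvis is installed and `model` and `corpus` come from a trained wrapper as in the construction sketch above.

import pyLDAvis

# Visualise the topics of time slice 0.
doc_topic, topic_term, doc_lengths, term_frequency, vocab = model.dtm_vis(corpus, time=0)
vis = pyLDAvis.prepare(topic_term_dists=topic_term,
                       doc_topic_dists=doc_topic,
                       doc_lengths=doc_lengths,
                       vocab=vocab,
                       term_frequency=term_frequency)
pyLDAvis.save_html(vis, 'dtm_time0.html')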
'Return all topics of a particular time slice, without probability values, for use with either "u_mass" or "c_v" coherence. TODO: because of the current print format this can only return topics for the first time slice; should we fix the coherence printing or change the print statements to mirror DTM python?'
def dtm_coherence(self, time, num_words=20):
coherence_topics = [] for topic_no in range(0, self.num_topics): topic = self.show_topic(topicid=topic_no, time=time, num_words=num_words) coherence_topic = [] for (prob, word) in topic: coherence_topic.append(word) coherence_topics.append(coherence_topic) return coherence_topics
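The token lists returned here can be scored with gensim's CoherenceModel; a sketch assuming `texts` and `dictionary` are the tokenised corpus and Dictionary used for training.

from gensim.models import CoherenceModel

topics = model.dtm_coherence(time=0, num_words=20)
cm = CoherenceModel(topics=topics, texts=texts, dictionary=dictionary, coherence='c_v')
print('c_v coherence at time slice 0: %.4f' % cm.get_coherence())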
'`mallet_path` is the path to the mallet executable, e.g. `/home/kofola/mallet-2.0.7/bin/mallet`. `corpus` is a gensim corpus, aka a stream of sparse document vectors. `id2word` is a mapping between token ids and tokens. `workers` is the number of threads, for parallel training. `prefix` is the string prefix under which all data files will be stored; default: system temp + random filename prefix. `optimize_interval` optimizes hyperparameters every N iterations (sometimes leads to a Java exception; 0 to switch off hyperparameter optimization). `iterations` is the number of sampling iterations. `topic_threshold` is the probability threshold above which a topic is considered; it mainly serves to keep document-topic distributions sparse.'
def __init__(self, mallet_path, corpus=None, num_topics=100, alpha=50, id2word=None, workers=4, prefix=None, optimize_interval=0, iterations=1000, topic_threshold=0.0):
self.mallet_path = mallet_path self.id2word = id2word if (self.id2word is None): logger.warning('no word id mapping provided; initializing from corpus, assuming identity') self.id2word = utils.dict_from_corpus(corpus) self.num_terms = len(self.id2word) else: self.num_terms = (0 if (not self.id2word) else (1 + max(self.id2word.keys()))) if (self.num_terms == 0): raise ValueError('cannot compute LDA over an empty collection (no terms)') self.num_topics = num_topics self.topic_threshold = topic_threshold self.alpha = alpha if (prefix is None): rand_prefix = (hex(random.randint(0, 16777215))[2:] + '_') prefix = os.path.join(tempfile.gettempdir(), rand_prefix) self.prefix = prefix self.workers = workers self.optimize_interval = optimize_interval self.iterations = iterations if (corpus is not None): self.train(corpus)
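A minimal training sketch for the MALLET wrapper; the mallet path is a placeholder and the toy corpus stands in for real data.

from gensim.corpora import Dictionary
from gensim.models.wrappers import LdaMallet

texts = [['human', 'interface', 'computer'], ['survey', 'user', 'computer', 'system'],
         ['graph', 'trees'], ['graph', 'minors', 'survey']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

lda = LdaMallet('/home/user/mallet-2.0.8/bin/mallet',  # placeholder path to the mallet script
                corpus=corpus, num_topics=2, id2word=dictionary, iterations=100)
print(lda.show_topics(num_topics=2, num_words=5))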
'Write out `corpus` in a file format that MALLET understands: one document per line: document id[SPACE]label (not used)[SPACE]whitespace delimited utf8-encoded tokens[NEWLINE]'
def corpus2mallet(self, corpus, file_like):
for (docno, doc) in enumerate(corpus): if self.id2word: tokens = sum((([self.id2word[tokenid]] * int(cnt)) for (tokenid, cnt) in doc), []) else: tokens = sum((([str(tokenid)] * int(cnt)) for (tokenid, cnt) in doc), []) file_like.write(utils.to_utf8(('%s 0 %s\n' % (docno, ' '.join(tokens)))))
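For illustration only (not a call into the wrapper), a standalone sketch of the line format the method writes for one toy bag-of-words document:

# One MALLET line per document: "<docno> 0 <token repeated count times> ...".
id2word = {0: 'human', 1: 'interface', 3: 'tree'}
doc = [(0, 2), (3, 1)]            # word id 0 appears twice, word id 3 once
docno = 0
tokens = []
for tokenid, cnt in doc:
    tokens.extend([id2word[tokenid]] * int(cnt))
line = '%s 0 %s\n' % (docno, ' '.join(tokens))
print(repr(line))                 # '0 0 human human tree\n'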
'Serialize the corpus (a stream of bag-of-words document vectors) to a temporary text file via `corpus2mallet`, then convert that text file to the binary MALLET format.'
def convert_input(self, corpus, infer=False, serialize_corpus=True):
if serialize_corpus: logger.info('serializing temporary corpus to %s', self.fcorpustxt()) with smart_open(self.fcorpustxt(), 'wb') as fout: self.corpus2mallet(corpus, fout) cmd = (self.mallet_path + ' import-file --preserve-case --keep-sequence --remove-stopwords --token-regex "\\S+" --input %s --output %s') if infer: cmd += (' --use-pipe-from ' + self.fcorpusmallet()) cmd = (cmd % (self.fcorpustxt(), (self.fcorpusmallet() + '.infer'))) else: cmd = (cmd % (self.fcorpustxt(), self.fcorpusmallet())) logger.info('converting temporary corpus to MALLET format with %s', cmd) check_output(args=cmd, shell=True)
'Return an iterator over the topic distribution of training corpus, by reading the doctopics.txt generated during training.'
def load_document_topics(self):
return self.read_doctopics(self.fdoctopics())
'Return the `num_words` most probable words for `num_topics` topics. Set `num_topics=-1` to include all topics. Set `formatted=True` to return the topics as a list of strings, or `False` to return them as lists of (weight, word) pairs.'
def show_topics(self, num_topics=10, num_words=10, log=False, formatted=True):
if ((num_topics < 0) or (num_topics >= self.num_topics)): num_topics = self.num_topics chosen_topics = range(num_topics) else: num_topics = min(num_topics, self.num_topics) sort_alpha = (self.alpha + (0.0001 * numpy.random.rand(len(self.alpha)))) sorted_topics = list(matutils.argsort(sort_alpha)) chosen_topics = (sorted_topics[:(num_topics // 2)] + sorted_topics[((- num_topics) // 2):]) shown = [] for i in chosen_topics: if formatted: topic = self.print_topic(i, topn=num_words) else: topic = self.show_topic(i, num_words=num_words) shown.append((i, topic)) if log: logger.info('topic #%i (%.3f): %s', i, self.alpha[i], topic) return shown
'Return the version of the MALLET installation at `direc_path`, determined from the jar archive or the pom.xml file.'
def get_version(self, direc_path):
try: '\n Check version of mallet via jar file\n ' archive = zipfile.ZipFile(direc_path, 'r') if (u'cc/mallet/regression/' not in archive.namelist()): return '2.0.7' else: return '2.0.8RC3' except Exception: xml_path = direc_path.split('bin')[0] try: doc = et.parse((xml_path + 'pom.xml')).getroot() namespace = doc.tag[:(doc.tag.index('}') + 1)] return doc.find((namespace + 'version')).text.split('-')[0] except Exception: return "Can't parse pom.xml version file"
'Yield document topic vectors from MALLET\'s "doc-topics" format, as sparse gensim vectors.'
def read_doctopics(self, fname, eps=1e-06, renorm=True):
mallet_version = self.get_version(self.mallet_path) with utils.smart_open(fname) as fin: for (lineno, line) in enumerate(fin): if ((lineno == 0) and line.startswith('#doc ')): continue parts = line.split()[2:] if (len(parts) == (2 * self.num_topics)): doc = [(id_, weight) for (id_, weight) in zip(map(int, parts[::2]), map(float, parts[1::2])) if (abs(weight) > eps)] elif ((len(parts) == self.num_topics) and (mallet_version != '2.0.7')): doc = [(id_, weight) for (id_, weight) in enumerate(map(float, parts)) if (abs(weight) > eps)] elif (mallet_version == '2.0.7'): '\n\n 1 1 0 1.0780612802674239 30.005575655428533364 2 0.005575655428533364 1 0.005575655428533364\n 2 2 0 0.9184413079632608 40.009062076892971008 3 0.009062076892971008 2 0.009062076892971008 1 0.009062076892971008\n In the above example there is a mix of the above if and elif statement. There are neither `2*num_topics` nor `num_topics` elements.\n It has 2 formats 40.009062076892971008 and 0 1.0780612802674239 which cannot be handled by above if elif.\n Also, there are some topics are missing(meaning that the topic is not there) which is another reason why the above if elif\n fails even when the `mallet` produces the right results\n\n ' count = 0 doc = [] if (len(parts) > 0): while (count < len(parts)): '\n if section is to deal with formats of type 2 0.034\n so if count reaches index of 2 and since int(2) == float(2) so if block is executed\n now there is one extra element afer 2, so count + 1 access should not give an error\n\n else section handles formats of type 20.034\n now count is there on index of 20.034 since float(20.034) != int(20.034) so else block\n is executed\n\n ' if (float(parts[count]) == int(parts[count])): if (float(parts[(count + 1)]) > eps): doc.append((int(parts[count]), float(parts[(count + 1)]))) count += 2 else: if ((float(parts[count]) - int(parts[count])) > eps): doc.append(((int(parts[count]) % 10), (float(parts[count]) - int(parts[count])))) count += 1 else: raise RuntimeError(('invalid doc topics format at line %i in %s' % ((lineno + 1), fname))) if renorm: total_weight = float(sum([weight for (_, weight) in doc])) if total_weight: doc = [(id_, (float(weight) / total_weight)) for (id_, weight) in doc] (yield doc)
'`id2word` is a mapping from word ids (integers) to words (strings). It is used to determine the vocabulary size, as well as for debugging and topic printing. If not set, it will be determined from the corpus.'
def __init__(self, corpus, id2word=None, num_topics=300):
self.id2word = id2word self.num_topics = num_topics if (corpus is not None): self.initialize(corpus)
'Initialize the random projection matrix.'
def initialize(self, corpus):
if (self.id2word is None): logger.info('no word id mapping provided; initializing from corpus, assuming identity') self.id2word = utils.dict_from_corpus(corpus) self.num_terms = len(self.id2word) else: self.num_terms = (1 + max(([(-1)] + self.id2word.keys()))) shape = (self.num_topics, self.num_terms) logger.info(('constructing %s random matrix' % str(shape))) randmat = (1 - (2 * np.random.binomial(1, 0.5, shape))) self.projection = np.asfortranarray(randmat, dtype=np.float32)
'Return RP representation of the input vector and/or corpus.'
def __getitem__(self, bow):
(is_corpus, bow) = utils.is_corpus(bow) if is_corpus: return self._apply(bow) if getattr(self, 'freshly_loaded', False): self.freshly_loaded = False self.projection = self.projection.copy('F') vec = (matutils.sparse2full(bow, self.num_terms).reshape(self.num_terms, 1) / np.sqrt(self.num_topics)) vec = np.asfortranarray(vec, dtype=np.float32) topic_dist = np.dot(self.projection, vec) return [(topicid, float(topicvalue)) for (topicid, topicvalue) in enumerate(topic_dist.flat) if (np.isfinite(topicvalue) and (not np.allclose(topicvalue, 0.0)))]
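A short usage sketch of the random-projection transformation, assuming a bag-of-words corpus and Dictionary built in the usual way.

from gensim.corpora import Dictionary
from gensim.models import RpModel

texts = [['cat', 'sat', 'mat'], ['dog', 'sat', 'log'], ['cat', 'dog']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

rp = RpModel(corpus, id2word=dictionary, num_topics=4)
print(rp[corpus[0]])      # single document -> list of (topic_id, value) pairs
print(list(rp[corpus]))   # whole corpus -> transformed corpus, evaluated lazily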
'Initialize the model from an iterable of `sentences`. Each sentence is a list of words (unicode strings) that will be used for training. The `sentences` iterable can be simply a list, but for larger corpora, consider an iterable that streams the sentences directly from disk/network. See :class:`BrownCorpus`, :class:`Text8Corpus` or :class:`LineSentence` in this module for such examples. If you don\'t supply `sentences`, the model is left uninitialized -- use if you plan to initialize it in some other way. `sg` defines the training algorithm. By default (`sg=0`), CBOW is used. Otherwise (`sg=1`), skip-gram is employed. `size` is the dimensionality of the feature vectors. `window` is the maximum distance between the current and predicted word within a sentence. `alpha` is the initial learning rate (will linearly drop to `min_alpha` as training progresses). `seed` = seed for the random number generator. Initial vectors for each word are seeded with a hash of the concatenation of word + str(seed). Note that for a fully deterministically-reproducible run, you must also limit the model to a single worker thread, to eliminate ordering jitter from OS thread scheduling. (In Python 3, reproducibility between interpreter launches also requires use of the PYTHONHASHSEED environment variable to control hash randomization.) `min_count` = ignore all words with total frequency lower than this. `max_vocab_size` = limit RAM during vocabulary building; if there are more unique words than this, then prune the infrequent ones. Every 10 million word types need about 1GB of RAM. Set to `None` for no limit (default). `sample` = threshold for configuring which higher-frequency words are randomly downsampled; default is 1e-3, useful range is (0, 1e-5). `workers` = use this many worker threads to train the model (=faster training with multicore machines). `hs` = if 1, hierarchical softmax will be used for model training. If set to 0 (default), and `negative` is non-zero, negative sampling will be used. `negative` = if > 0, negative sampling will be used, the int for negative specifies how many "noise words" should be drawn (usually between 5-20). Default is 5. If set to 0, no negative sampling is used. `cbow_mean` = if 0, use the sum of the context word vectors. If 1 (default), use the mean. Only applies when cbow is used. `hashfxn` = hash function to use to randomly initialize weights, for increased training reproducibility. Default is Python\'s rudimentary built-in hash function. `iter` = number of iterations (epochs) over the corpus. Default is 5. `trim_rule` = vocabulary trimming rule, specifies whether certain words should remain in the vocabulary, be trimmed away, or handled using the default (discard if word count < min_count). Can be None (min_count will be used), or a callable that accepts parameters (word, count, min_count) and returns either `utils.RULE_DISCARD`, `utils.RULE_KEEP` or `utils.RULE_DEFAULT`. Note: The rule, if given, is only used to prune vocabulary during build_vocab() and is not stored as part of the model. `sorted_vocab` = if 1 (default), sort the vocabulary by descending frequency before assigning word indexes. `batch_words` = target size (in words) for batches of examples passed to worker threads (and thus cython routines). Default is 10000. (Larger batches will be passed if individual texts are longer than 10000 words, but the standard cython code truncates to that maximum.)'
def __init__(self, sentences=None, size=100, alpha=0.025, window=5, min_count=5, max_vocab_size=None, sample=0.001, seed=1, workers=3, min_alpha=0.0001, sg=0, hs=0, negative=5, cbow_mean=1, hashfxn=hash, iter=5, null_word=0, trim_rule=None, sorted_vocab=1, batch_words=MAX_WORDS_IN_BATCH, compute_loss=False):
self.load = call_on_class_only if (FAST_VERSION == (-1)): logger.warning('Slow version of {0} is being used'.format(__name__)) else: logger.debug('Fast version of {0} is being used'.format(__name__)) self.initialize_word_vectors() self.sg = int(sg) self.cum_table = None self.vector_size = int(size) self.layer1_size = int(size) if ((size % 4) != 0): logger.warning('consider setting layer size to a multiple of 4 for greater performance') self.alpha = float(alpha) self.min_alpha_yet_reached = float(alpha) self.window = int(window) self.max_vocab_size = max_vocab_size self.seed = seed self.random = random.RandomState(seed) self.min_count = min_count self.sample = sample self.workers = int(workers) self.min_alpha = float(min_alpha) self.hs = hs self.negative = negative self.cbow_mean = int(cbow_mean) self.hashfxn = hashfxn self.iter = iter self.null_word = null_word self.train_count = 0 self.total_train_time = 0 self.sorted_vocab = sorted_vocab self.batch_words = batch_words self.model_trimmed_post_training = False self.compute_loss = compute_loss self.running_training_loss = 0 if (sentences is not None): if isinstance(sentences, GeneratorType): raise TypeError("You can't pass a generator as the sentences argument. Try an iterator.") self.build_vocab(sentences, trim_rule=trim_rule) self.train(sentences, total_examples=self.corpus_count, epochs=self.iter, start_alpha=self.alpha, end_alpha=self.min_alpha) elif (trim_rule is not None): logger.warning('The rule, if given, is only used to prune vocabulary during build_vocab() and is not stored as part of the model. ') logger.warning('Model initialized without sentences. trim_rule provided, if any, will be ignored.')
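A minimal end-to-end sketch of this constructor on a toy corpus (vector dimensionality is `size`, `sg=0` selects CBOW); real corpora should of course be streamed.

from gensim.models import Word2Vec

sentences = [['the', 'quick', 'brown', 'fox'],
             ['jumps', 'over', 'the', 'lazy', 'dog'],
             ['the', 'dog', 'barks']]

# min_count=1 keeps every word in this tiny corpus.
model = Word2Vec(sentences, size=50, window=3, min_count=1, workers=1, sg=0, iter=10, seed=1)
print(model.wv['dog'].shape)                  # (50,)
print(model.wv.most_similar('dog', topn=2))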
'Create a cumulative-distribution table using stored vocabulary word counts for drawing random words in the negative-sampling training routines. To draw a word index, choose a random integer up to the maximum value in the table (cum_table[-1]), then find that integer\'s sorted insertion point (as if by bisect_left or ndarray.searchsorted()). That insertion point is the drawn index, coming up in proportion equal to the increment at that slot. Called internally from \'build_vocab()\'.'
def make_cum_table(self, power=0.75, domain=((2 ** 31) - 1)):
vocab_size = len(self.wv.index2word) self.cum_table = zeros(vocab_size, dtype=uint32) train_words_pow = 0.0 for word_index in xrange(vocab_size): train_words_pow += (self.wv.vocab[self.wv.index2word[word_index]].count ** power) cumulative = 0.0 for word_index in xrange(vocab_size): cumulative += (self.wv.vocab[self.wv.index2word[word_index]].count ** power) self.cum_table[word_index] = round(((cumulative / train_words_pow) * domain)) if (len(self.cum_table) > 0): assert (self.cum_table[(-1)] == domain)
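A self-contained NumPy sketch (independent of the model) of the drawing scheme this table enables: raise counts to `power`, cumulate, then map uniform integers back to word indexes with searchsorted.

import numpy as np

counts = np.array([50, 30, 15, 5], dtype=np.float64)       # toy word frequencies
power, domain = 0.75, 2 ** 31 - 1

weights = counts ** power
cum_table = np.round(np.cumsum(weights) / weights.sum() * domain).astype(np.uint32)

rng = np.random.RandomState(1)
draws = cum_table.searchsorted(rng.randint(cum_table[-1], size=10000))
# Empirical draw frequencies approximate counts**0.75, normalised.
print(np.bincount(draws, minlength=len(counts)) / 10000.0)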
'Create a binary Huffman tree using stored vocabulary word counts. Frequent words will have shorter binary codes. Called internally from `build_vocab()`.'
def create_binary_tree(self):
logger.info('constructing a huffman tree from %i words', len(self.wv.vocab)) heap = list(itervalues(self.wv.vocab)) heapq.heapify(heap) for i in xrange((len(self.wv.vocab) - 1)): (min1, min2) = (heapq.heappop(heap), heapq.heappop(heap)) heapq.heappush(heap, Vocab(count=(min1.count + min2.count), index=(i + len(self.wv.vocab)), left=min1, right=min2)) if heap: (max_depth, stack) = (0, [(heap[0], [], [])]) while stack: (node, codes, points) = stack.pop() if (node.index < len(self.wv.vocab)): (node.code, node.point) = (codes, points) max_depth = max(len(codes), max_depth) else: points = array((list(points) + [(node.index - len(self.wv.vocab))]), dtype=uint32) stack.append((node.left, array((list(codes) + [0]), dtype=uint8), points)) stack.append((node.right, array((list(codes) + [1]), dtype=uint8), points)) logger.info('built huffman tree with maximum node depth %i', max_depth)
'Build vocabulary from a sequence of sentences (can be a once-only generator stream). Each sentence must be a list of unicode strings.'
def build_vocab(self, sentences, keep_raw_vocab=False, trim_rule=None, progress_per=10000, update=False):
self.scan_vocab(sentences, progress_per=progress_per, trim_rule=trim_rule) self.scale_vocab(keep_raw_vocab=keep_raw_vocab, trim_rule=trim_rule, update=update) self.finalize_vocab(update=update)
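A sketch of the two typical calls: an initial vocabulary build on a fresh model, and an online update with `update=True` when new sentences arrive later.

from gensim.models import Word2Vec

old_sentences = [['hello', 'world'], ['machine', 'learning', 'is', 'fun']]
new_sentences = [['deep', 'learning', 'is', 'also', 'fun']]

model = Word2Vec(size=50, min_count=1, seed=1, workers=1)    # no sentences yet
model.build_vocab(old_sentences)                             # initial vocabulary
model.train(old_sentences, total_examples=model.corpus_count, epochs=model.iter)

model.build_vocab(new_sentences, update=True)                # online vocabulary update
model.train(new_sentences, total_examples=model.corpus_count, epochs=model.iter)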
'Do an initial scan of all words appearing in sentences.'
def scan_vocab(self, sentences, progress_per=10000, trim_rule=None):
logger.info('collecting all words and their counts') sentence_no = (-1) total_words = 0 min_reduce = 1 vocab = defaultdict(int) checked_string_types = 0 for (sentence_no, sentence) in enumerate(sentences): if (not checked_string_types): if isinstance(sentence, string_types): logger.warning("Each 'sentences' item should be a list of words (usually unicode strings).First item here is instead plain %s.", type(sentence)) checked_string_types += 1 if ((sentence_no % progress_per) == 0): logger.info('PROGRESS: at sentence #%i, processed %i words, keeping %i word types', sentence_no, (sum(itervalues(vocab)) + total_words), len(vocab)) for word in sentence: vocab[word] += 1 if (self.max_vocab_size and (len(vocab) > self.max_vocab_size)): total_words += utils.prune_vocab(vocab, min_reduce, trim_rule=trim_rule) min_reduce += 1 total_words += sum(itervalues(vocab)) logger.info('collected %i word types from a corpus of %i raw words and %i sentences', len(vocab), total_words, (sentence_no + 1)) self.corpus_count = (sentence_no + 1) self.raw_vocab = vocab
'Apply vocabulary settings for `min_count` (discarding less-frequent words) and `sample` (controlling the downsampling of more-frequent words). Calling with `dry_run=True` will only simulate the provided settings and report the size of the retained vocabulary, effective corpus length, and estimated memory requirements. Results are both printed via logging and returned as a dict. Delete the raw vocabulary after the scaling is done to free up RAM, unless `keep_raw_vocab` is set.'
def scale_vocab(self, min_count=None, sample=None, dry_run=False, keep_raw_vocab=False, trim_rule=None, update=False):
min_count = (min_count or self.min_count) sample = (sample or self.sample) drop_total = drop_unique = 0 if (not update): logger.info('Loading a fresh vocabulary') (retain_total, retain_words) = (0, []) if (not dry_run): self.wv.index2word = [] self.min_count = min_count self.sample = sample self.wv.vocab = {} for (word, v) in iteritems(self.raw_vocab): if keep_vocab_item(word, v, min_count, trim_rule=trim_rule): retain_words.append(word) retain_total += v if (not dry_run): self.wv.vocab[word] = Vocab(count=v, index=len(self.wv.index2word)) self.wv.index2word.append(word) else: drop_unique += 1 drop_total += v original_unique_total = (len(retain_words) + drop_unique) retain_unique_pct = ((len(retain_words) * 100) / max(original_unique_total, 1)) logger.info('min_count=%d retains %i unique words (%i%% of original %i, drops %i)', min_count, len(retain_words), retain_unique_pct, original_unique_total, drop_unique) original_total = (retain_total + drop_total) retain_pct = ((retain_total * 100) / max(original_total, 1)) logger.info('min_count=%d leaves %i word corpus (%i%% of original %i, drops %i)', min_count, retain_total, retain_pct, original_total, drop_total) else: logger.info('Updating model with new vocabulary') new_total = pre_exist_total = 0 new_words = pre_exist_words = [] for (word, v) in iteritems(self.raw_vocab): if keep_vocab_item(word, v, min_count, trim_rule=trim_rule): if (word in self.wv.vocab): pre_exist_words.append(word) pre_exist_total += v if (not dry_run): self.wv.vocab[word].count += v else: new_words.append(word) new_total += v if (not dry_run): self.wv.vocab[word] = Vocab(count=v, index=len(self.wv.index2word)) self.wv.index2word.append(word) else: drop_unique += 1 drop_total += v original_unique_total = ((len(pre_exist_words) + len(new_words)) + drop_unique) pre_exist_unique_pct = ((len(pre_exist_words) * 100) / max(original_unique_total, 1)) new_unique_pct = ((len(new_words) * 100) / max(original_unique_total, 1)) logger.info('New added %i unique words (%i%% of original %i)\n and increased the count of %i pre-existing words (%i%% of original %i)', len(new_words), new_unique_pct, original_unique_total, len(pre_exist_words), pre_exist_unique_pct, original_unique_total) retain_words = (new_words + pre_exist_words) retain_total = (new_total + pre_exist_total) if (not sample): threshold_count = retain_total elif (sample < 1.0): threshold_count = (sample * retain_total) else: threshold_count = int(((sample * (3 + sqrt(5))) / 2)) (downsample_total, downsample_unique) = (0, 0) for w in retain_words: v = self.raw_vocab[w] word_probability = ((sqrt((v / threshold_count)) + 1) * (threshold_count / v)) if (word_probability < 1.0): downsample_unique += 1 downsample_total += (word_probability * v) else: word_probability = 1.0 downsample_total += v if (not dry_run): self.wv.vocab[w].sample_int = int(round((word_probability * (2 ** 32)))) if ((not dry_run) and (not keep_raw_vocab)): logger.info('deleting the raw counts dictionary of %i items', len(self.raw_vocab)) self.raw_vocab = defaultdict(int) logger.info('sample=%g downsamples %i most-common words', sample, downsample_unique) logger.info('downsampling leaves estimated %i word corpus (%.1f%% of prior %i)', downsample_total, ((downsample_total * 100.0) / max(retain_total, 1)), retain_total) report_values = {'drop_unique': drop_unique, 'retain_total': retain_total, 'downsample_unique': downsample_unique, 'downsample_total': int(downsample_total)} report_values['memory'] = self.estimate_memory(vocab_size=len(retain_words)) return 
report_values
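A small numeric sketch of the downsampling rule used here: with `sample` < 1.0 the threshold count is `sample * retain_total`, and a word with raw count v is kept with probability (sqrt(v / threshold) + 1) * (threshold / v), capped at 1.0.

from math import sqrt

sample = 1e-3
raw_counts = {'the': 60000, 'learning': 900, 'gensim': 40}
retain_total = sum(raw_counts.values())
threshold = sample * retain_total

for word, v in sorted(raw_counts.items(), key=lambda kv: -kv[1]):
    keep_prob = min(1.0, (sqrt(v / float(threshold)) + 1) * (threshold / float(v)))
    print('%-10s count=%6d kept with p=%.3f' % (word, v, keep_prob))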
'Build tables and model weights based on final vocabulary settings.'
def finalize_vocab(self, update=False):
if (not self.wv.index2word): self.scale_vocab() if (self.sorted_vocab and (not update)): self.sort_vocab() if self.hs: self.create_binary_tree() if self.negative: self.make_cum_table() if self.null_word: (word, v) = ('\x00', Vocab(count=1, sample_int=0)) v.index = len(self.wv.vocab) self.wv.index2word.append(word) self.wv.vocab[word] = v if (not update): self.reset_weights() else: self.update_weights()
'Sort the vocabulary so the most frequent words have the lowest indexes.'
def sort_vocab(self):
if len(self.wv.syn0): raise RuntimeError('cannot sort vocabulary after model weights already initialized.') self.wv.index2word.sort(key=(lambda word: self.wv.vocab[word].count), reverse=True) for (i, word) in enumerate(self.wv.index2word): self.wv.vocab[word].index = i
'Borrow shareable pre-built structures (like vocab) from the other_model. Useful if testing multiple models in parallel on the same corpus.'
def reset_from(self, other_model):
self.wv.vocab = other_model.wv.vocab self.wv.index2word = other_model.wv.index2word self.cum_table = other_model.cum_table self.corpus_count = other_model.corpus_count self.reset_weights()
'Train a single batch of sentences. Return 2-tuple `(effective word count after ignoring unknown words and sentence length trimming, total word count)`.'
def _do_train_job(self, sentences, alpha, inits):
(work, neu1) = inits tally = 0 if self.sg: tally += train_batch_sg(self, sentences, alpha, work, self.compute_loss) else: tally += train_batch_cbow(self, sentences, alpha, work, neu1, self.compute_loss) return (tally, self._raw_word_count(sentences))
'Return the number of words in a given job.'
def _raw_word_count(self, job):
return sum((len(sentence) for sentence in job))
'Update the model\'s neural weights from a sequence of sentences (can be a once-only generator stream). For Word2Vec, each sentence must be a list of unicode strings. (Subclasses may accept other examples.) To support linear learning-rate decay from (initial) alpha to min_alpha, and accurate progress-percentage logging, either total_examples (count of sentences) or total_words (count of raw words in sentences) MUST be provided. (If the corpus is the same as was provided to `build_vocab()`, the count of examples in that corpus will be available in the model\'s `corpus_count` property.) To avoid common mistakes around the model\'s ability to do multiple training passes itself, an explicit `epochs` argument MUST be provided. In the common and recommended case, where `train()` is only called once, the model\'s cached `iter` value should be supplied as the `epochs` value.'
def train(self, sentences, total_examples=None, total_words=None, epochs=None, start_alpha=None, end_alpha=None, word_count=0, queue_factor=2, report_delay=1.0, compute_loss=None):
if self.model_trimmed_post_training: raise RuntimeError('Parameters for training were discarded using model_trimmed_post_training method') if (FAST_VERSION < 0): warnings.warn('C extension not loaded for Word2Vec, training will be slow. Install a C compiler and reinstall gensim for fast training.') self.neg_labels = [] if (self.negative > 0): self.neg_labels = zeros((self.negative + 1)) self.neg_labels[0] = 1.0 if compute_loss: self.compute_loss = compute_loss self.running_training_loss = 0 logger.info('training model with %i workers on %i vocabulary and %i features, using sg=%s hs=%s sample=%s negative=%s window=%s', self.workers, len(self.wv.vocab), self.layer1_size, self.sg, self.hs, self.sample, self.negative, self.window) if (not self.wv.vocab): raise RuntimeError('you must first build vocabulary before training the model') if (not len(self.wv.syn0)): raise RuntimeError('you must first finalize vocabulary before training the model') if (not hasattr(self, 'corpus_count')): raise ValueError("The number of sentences in the training corpus is missing. Did you load the model via KeyedVectors.load_word2vec_format?Models loaded via load_word2vec_format don't support further training. Instead start with a blank model, scan_vocab on the new corpus, intersect_word2vec_format with the old model, then train.") if ((total_words is None) and (total_examples is None)): raise ValueError('You must specify either total_examples or total_words, for proper alpha and progress calculations. The usual value is total_examples=model.corpus_count.') if (epochs is None): raise ValueError('You must specify an explict epochs count. The usual value is epochs=model.iter.') start_alpha = (start_alpha or self.alpha) end_alpha = (end_alpha or self.min_alpha) job_tally = 0 if (epochs > 1): sentences = utils.RepeatCorpusNTimes(sentences, epochs) total_words = (total_words and (total_words * epochs)) total_examples = (total_examples and (total_examples * epochs)) def worker_loop(): 'Train the model, lifting lists of sentences from the job_queue.' work = matutils.zeros_aligned(self.layer1_size, dtype=REAL) neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL) jobs_processed = 0 while True: job = job_queue.get() if (job is None): progress_queue.put(None) break (sentences, alpha) = job (tally, raw_tally) = self._do_train_job(sentences, alpha, (work, neu1)) progress_queue.put((len(sentences), tally, raw_tally)) jobs_processed += 1 logger.debug('worker exiting, processed %i jobs', jobs_processed) def job_producer(): 'Fill jobs queue using the input `sentences` iterator.' 
(job_batch, batch_size) = ([], 0) (pushed_words, pushed_examples) = (0, 0) next_alpha = start_alpha if (next_alpha > self.min_alpha_yet_reached): logger.warning("Effective 'alpha' higher than previous training cycles") self.min_alpha_yet_reached = next_alpha job_no = 0 for (sent_idx, sentence) in enumerate(sentences): sentence_length = self._raw_word_count([sentence]) if ((batch_size + sentence_length) <= self.batch_words): job_batch.append(sentence) batch_size += sentence_length else: logger.debug('queueing job #%i (%i words, %i sentences) at alpha %.05f', job_no, batch_size, len(job_batch), next_alpha) job_no += 1 job_queue.put((job_batch, next_alpha)) if (end_alpha < next_alpha): if total_examples: pushed_examples += len(job_batch) progress = ((1.0 * pushed_examples) / total_examples) else: pushed_words += self._raw_word_count(job_batch) progress = ((1.0 * pushed_words) / total_words) next_alpha = (start_alpha - ((start_alpha - end_alpha) * progress)) next_alpha = max(end_alpha, next_alpha) (job_batch, batch_size) = ([sentence], sentence_length) if job_batch: logger.debug('queueing job #%i (%i words, %i sentences) at alpha %.05f', job_no, batch_size, len(job_batch), next_alpha) job_no += 1 job_queue.put((job_batch, next_alpha)) if ((job_no == 0) and (self.train_count == 0)): logger.warning('train() called with an empty iterator (if not intended, be sure to provide a corpus that offers restartable iteration = an iterable).') for _ in xrange(self.workers): job_queue.put(None) logger.debug('job loop exiting, total %i jobs', job_no) job_queue = Queue(maxsize=(queue_factor * self.workers)) progress_queue = Queue(maxsize=((queue_factor + 1) * self.workers)) workers = [threading.Thread(target=worker_loop) for _ in xrange(self.workers)] unfinished_worker_count = len(workers) workers.append(threading.Thread(target=job_producer)) for thread in workers: thread.daemon = True thread.start() (example_count, trained_word_count, raw_word_count) = (0, 0, word_count) (start, next_report) = ((default_timer() - 1e-05), 1.0) while (unfinished_worker_count > 0): report = progress_queue.get() if (report is None): unfinished_worker_count -= 1 logger.info('worker thread finished; awaiting finish of %i more threads', unfinished_worker_count) continue (examples, trained_words, raw_words) = report job_tally += 1 example_count += examples trained_word_count += trained_words raw_word_count += raw_words elapsed = (default_timer() - start) if (elapsed >= next_report): if total_examples: logger.info('PROGRESS: at %.2f%% examples, %.0f words/s, in_qsize %i, out_qsize %i', ((100.0 * example_count) / total_examples), (trained_word_count / elapsed), utils.qsize(job_queue), utils.qsize(progress_queue)) else: logger.info('PROGRESS: at %.2f%% words, %.0f words/s, in_qsize %i, out_qsize %i', ((100.0 * raw_word_count) / total_words), (trained_word_count / elapsed), utils.qsize(job_queue), utils.qsize(progress_queue)) next_report = (elapsed + report_delay) elapsed = (default_timer() - start) logger.info('training on %i raw words (%i effective words) took %.1fs, %.0f effective words/s', raw_word_count, trained_word_count, elapsed, (trained_word_count / elapsed)) if (job_tally < (10 * self.workers)): logger.warning("under 10 jobs per worker: consider setting a smaller `batch_words' for smoother alpha decay") if (total_examples and (total_examples != example_count)): logger.warning('supplied example count (%i) did not equal expected count (%i)', example_count, total_examples) if (total_words and (total_words != raw_word_count)): 
logger.warning('supplied raw word count (%i) did not equal expected count (%i)', raw_word_count, total_words) self.train_count += 1 self.total_train_time += elapsed self.clear_sims() return trained_word_count
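A sketch of the deferred-training call pattern this method expects, with explicit `epochs` and `total_examples`; `running_training_loss` is the counter this code accumulates when `compute_loss=True`.

from gensim.models import Word2Vec

sentences = [['cat', 'say', 'meow'], ['dog', 'say', 'woof'], ['cow', 'say', 'moo']]

model = Word2Vec(min_count=1, size=50, seed=1, workers=1, compute_loss=True)
model.build_vocab(sentences)
effective_words = model.train(sentences,
                              total_examples=model.corpus_count,
                              epochs=model.iter)
print('effective words trained: %d' % effective_words)
print('accumulated training loss: %.4f' % model.running_training_loss)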
'Score the log probability for a sequence of sentences (can be a once-only generator stream). Each sentence must be a list of unicode strings. This does not change the fitted model in any way (see Word2Vec.train() for that). We have currently only implemented score for the hierarchical softmax scheme, so you need to have run word2vec with hs=1 and negative=0 for this to work. Note that you should specify total_sentences; we\'ll run into problems if you ask to score more than this number of sentences but it is inefficient to set the value too high. See the article by [taddy]_ and the gensim demo at [deepir]_ for examples of how to use such scores in document classification. .. [taddy] Taddy, Matt. Document Classification by Inversion of Distributed Language Representations, in Proceedings of the 2015 Conference of the Association of Computational Linguistics. .. [deepir] https://github.com/piskvorky/gensim/blob/develop/docs/notebooks/deepir.ipynb'
def score(self, sentences, total_sentences=int(1000000.0), chunksize=100, queue_factor=2, report_delay=1):
if (FAST_VERSION < 0): warnings.warn('C extension compilation failed, scoring will be slow. Install a C compiler and reinstall gensim for fastness.') logger.info('scoring sentences with %i workers on %i vocabulary and %i features, using sg=%s hs=%s sample=%s and negative=%s', self.workers, len(self.wv.vocab), self.layer1_size, self.sg, self.hs, self.sample, self.negative) if (not self.wv.vocab): raise RuntimeError('you must first build vocabulary before scoring new data') if (not self.hs): raise RuntimeError('We have currently only implemented score for the hierarchical softmax scheme, so you need to have run word2vec with hs=1 and negative=0 for this to work.') def worker_loop(): 'Compute log probability for each sentence, lifting lists of sentences from the jobs queue.' work = zeros(1, dtype=REAL) neu1 = matutils.zeros_aligned(self.layer1_size, dtype=REAL) while True: job = job_queue.get() if (job is None): break ns = 0 for (sentence_id, sentence) in job: if (sentence_id >= total_sentences): break if self.sg: score = score_sentence_sg(self, sentence, work) else: score = score_sentence_cbow(self, sentence, work, neu1) sentence_scores[sentence_id] = score ns += 1 progress_queue.put(ns) (start, next_report) = (default_timer(), 1.0) job_queue = Queue(maxsize=(queue_factor * self.workers)) progress_queue = Queue(maxsize=((queue_factor + 1) * self.workers)) workers = [threading.Thread(target=worker_loop) for _ in xrange(self.workers)] for thread in workers: thread.daemon = True thread.start() sentence_count = 0 sentence_scores = matutils.zeros_aligned(total_sentences, dtype=REAL) push_done = False done_jobs = 0 jobs_source = enumerate(utils.grouper(enumerate(sentences), chunksize)) while True: try: (job_no, items) = next(jobs_source) if (((job_no - 1) * chunksize) > total_sentences): logger.warning('terminating after %i sentences (set higher total_sentences if you want more).', total_sentences) job_no -= 1 raise StopIteration() logger.debug('putting job #%i in the queue', job_no) job_queue.put(items) except StopIteration: logger.info('reached end of input; waiting to finish %i outstanding jobs', ((job_no - done_jobs) + 1)) for _ in xrange(self.workers): job_queue.put(None) push_done = True try: while ((done_jobs < (job_no + 1)) or (not push_done)): ns = progress_queue.get(push_done) sentence_count += ns done_jobs += 1 elapsed = (default_timer() - start) if (elapsed >= next_report): logger.info('PROGRESS: at %.2f%% sentences, %.0f sentences/s', (100.0 * sentence_count), (sentence_count / elapsed)) next_report = (elapsed + report_delay) else: break except Empty: pass elapsed = (default_timer() - start) self.clear_sims() logger.info('scoring %i sentences took %.1fs, %.0f sentences/s', sentence_count, elapsed, (sentence_count / elapsed)) return sentence_scores[:sentence_count]
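A scoring sketch; as the docstring notes, this requires a model trained with hierarchical softmax (`hs=1, negative=0`), and the held-out sentences here reuse in-vocabulary words only.

from gensim.models import Word2Vec

train_sentences = [['the', 'cat', 'sat'], ['the', 'dog', 'sat'], ['the', 'cat', 'ran']]
heldout = [['the', 'cat', 'sat'], ['dog', 'the', 'ran']]

model = Word2Vec(train_sentences, hs=1, negative=0, min_count=1, size=20, seed=1, workers=1)
log_probs = model.score(heldout, total_sentences=len(heldout))
print(log_probs)    # one log-probability per held-out sentence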
'Remove all L2-normalized word vectors from the model. You will have to recompute them using the init_sims() method.'
def clear_sims(self):
self.wv.syn0norm = None
'Copy all the existing weights, and reset the weights for the newly added vocabulary.'
def update_weights(self):
logger.info('updating layer weights') gained_vocab = (len(self.wv.vocab) - len(self.wv.syn0)) newsyn0 = empty((gained_vocab, self.vector_size), dtype=REAL) for i in xrange(len(self.wv.syn0), len(self.wv.vocab)): newsyn0[(i - len(self.wv.syn0))] = self.seeded_vector((self.wv.index2word[i] + str(self.seed))) if (not len(self.wv.syn0)): raise RuntimeError('You cannot do an online vocabulary-update of a model which has no prior vocabulary. First build the vocabulary of your model with a corpus before doing an online update.') self.wv.syn0 = vstack([self.wv.syn0, newsyn0]) if self.hs: self.syn1 = vstack([self.syn1, zeros((gained_vocab, self.layer1_size), dtype=REAL)]) if self.negative: self.syn1neg = vstack([self.syn1neg, zeros((gained_vocab, self.layer1_size), dtype=REAL)]) self.wv.syn0norm = None self.syn0_lockf = ones(len(self.wv.vocab), dtype=REAL)